Package Torello.HTML.Tools.NewsSite
Class ScrapeArticles
java.lang.Object
    Torello.HTML.Tools.NewsSite.ScrapeArticles
public class ScrapeArticles extends java.lang.Object
ScrapeArticles - Documentation.
This class simply runs a download on each article URL that is passed to it, and provides a simple mechanism for storing and saving the articles that it finds to the file-system.
Example:

// This builds an "Article Getter."  Each news-article on this web-site is wrapped in a
// <DIV CLASS="content ..."> HTML Divider Element.  This is how to retrieve the article-body.
ArticleGet getter = ArticleGet.usual("div", "class", TextComparitor.EQ, "content");

// Save the state of the download, just in case.  Use the standardized "File System Pause"
// class by calling the factory-builder method 'getFSInstance' - and provide a simple
// file-name where the state may be saved.  The file will be under 1 kb.
Pause pause = Pause.getFSInstance("state.dat");

// Load the news web-site article URL's that were already retrieved (by ScrapeURLs) and
// saved to disk.
Vector<Vector<String>> articleURLs = (Vector<Vector<String>>)
    FileRW.readObjectFromFileNOCNFE("urls.vdat", Vector.class, true);

// Use the standard, factory-provided "ScrapedArticleReceiver."  This method will return
// a receiver that sends data-files to the directory 'chineseNewsBoard/' on the local
// file-system.
ScrapedArticleReceiver receiver = ScrapedArticleReceiver.saveToFS("chineseNewsBoard/");

// The 'log' parameter accepts any Appendable.  A StorageWriter sends text to System.out,
// and saves it, internally.
StorageWriter sw = new StorageWriter();

// Make sure to call initialize, and then start the article downloading process.
pause.initialize();
ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, sw);
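The Example above uses the factory-built, file-system receiver. When downloaded Article's must be sent elsewhere, the interface ScrapedArticleReceiver may be implemented directly. The sketch below is illustrative only - it assumes that ScrapedArticleReceiver is a functional-interface whose method matches the call articleReceiver.receive(articleResult, outerCounter, innerCounter) made in the Exact Method Body at the bottom of this page; consult the interface documentation for the exact signature.

// Hedged sketch: a custom receiver that just prints a notice for each Article.
// The (article, sectionNum, articleNum) parameter-shape mirrors the receive(...)
// call in the method body below.
ScrapedArticleReceiver printingReceiver = (article, sectionNum, articleNum) ->
    System.out.println("Received article [" + sectionNum + ", " + articleNum + "]");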
Hi-Lited Source-Code:
- View Here: Torello/HTML/Tools/NewsSite/ScrapeArticles.java
Stateless Class:
This class neither contains any program-state, nor can it be instantiated.
The @StaticFunctional Annotation may also be called 'The Spaghetti Report':
- 1 Constructor(s), 1 declared private, zero-argument constructor
- 1 Method(s), 1 declared static
- 0 Field(s)
Method Summary
Modifier and Type: static Vector<Vector<DownloadResult>>
Method:
download(ScrapedArticleReceiver articleReceiver, Vector<Vector<String>> articleURLs, ArticleGet articleGetter, boolean skipArticlesWithoutPhotos, StrFilter bannerAndAdFinder, boolean keepOriginalPageHTML, Pause pause, Appendable log)
Method Detail
download
public static java.util.Vector<java.util.Vector<DownloadResult>> download(
    ScrapedArticleReceiver articleReceiver,
    java.util.Vector<java.util.Vector<java.lang.String>> articleURLs,
    ArticleGet articleGetter,
    boolean skipArticlesWithoutPhotos,
    StrFilter bannerAndAdFinder,
    boolean keepOriginalPageHTML,
    Pause pause,
    java.lang.Appendable log
)
throws PauseException, ReceiveException, java.io.IOException
This method is used to do the downloading of newspaper articles.

Parameters:

articleReceiver - This is an instance of ScrapedArticleReceiver. Whenever an Article has successfully downloaded, it will be passed to this 'receiver' class. There is a pre-written, standard ScrapedArticleReceiver that writes to a directory on the file-system as Article's are downloaded. If there is a need to transmit downloaded Article's elsewhere, implement that interface, and provide an instance of it to this parameter (a minimal sketch appears just after the Example near the top of this page).

articleURLs - This parameter should have been generated by a call to the method ScrapeURLs.getArticleURLs(...).
articleGetter - This is essentially a "Post-Processor" for HTML web-based newspaper articles. This parameter may not be null. It is a simple, one-line lambda-predicate which needs to be implemented by the programmer. Internet news web-sites (such as news.yahoo.com, cnn.com, and gov.cn) serve News-Articles on pages that contain a lot of extraneous and advertising links and content. This parameter needs to extract the Article-body content from the rest of the page. This is usually very trivial, but it is also mandatory. Read about class ArticleGet for more information about extracting the news-content from a newspaper Article web-page.

skipArticlesWithoutPhotos - When this is TRUE, articles that contain only textual content will be skipped. This can be useful for foreign-news sources, where the reader is usually working harder to understand the content in the first place. This class is primarily used with foreign-news content web-sites, and staring at pages of Mandarin Chinese or Spanish is usually a lot easier if there is at least one photo on the page. This parameter allows users to skip highly dense articles that do not contain at least one picture.

bannerAndAdFinder - This parameter may be null, but if it is not, it will be used to skip banner-advertisement images. This parameter, in reality, does very little. It is not actually used to eliminate advertising images, but rather only to identify when an image is a banner, advertisement, or spurious picture. Since this is a news web-site scraping package, there is a feature that allows a user to require that only newspaper articles containing a photo be downloaded - and the real purpose of including the 'bannerAndAdFinder' is to allow the scrape mechanism to skip articles whose only photos are advertisements.
NOTE: Again, the primary impetus for developing these tools was scraping and translating news articles from foreign countries like Spain, China, and parts of South America - though they could be used with any news-source desired. When reading foreign-language text, it helps "a little bit more" to see a picture. This parameter exists solely for that purpose.

PRODUCT ADVERTISEMENTS & FACEBOOK / TWITTER LINKS: Removing actual links about "pinning to Reddit.com" or "Tweeting" articles can be done using either of the following (see the sketch after this list):

- ArticleGet - Writing an instance of ArticleGet that NOT ONLY extracts the body of a newspaper-article, BUT ALSO performs HTML clean-up using the 'Remove' method of the NodeSearch package.

- HTMLModifier - Writing a "cleaner" version of the HTMLModifier lambda-expression / functional-interface can also use the NodeSearch classes for removing annoying commercials - or buttons about "Sharing a link on Facebook." The class ToHTML provides a window for accepting an instance of HTMLModifier when converting the generated serialized-data HTML Vector's into '.html' index files.
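Below is a minimal sketch of the first option. It is illustrative only, and makes two assumptions: that TagNode names its tag through the field 'tok' and exposes attribute-values through AV(String) - the same AV(String) call used in the Exact Method Body below - and that an advertisement can be spotted by a '/ads/' URL-substring. The NodeSearch 'Remove' utilities are the intended route for heavier clean-up.

// Hedged sketch, not the library's canonical pattern.  The lambda's (url, page)
// shape mirrors the 'articleGetter.apply(url, page)' call in the method body below.
ArticleGet cleanGetter = (url, page) ->
{
    // Extract the article-body exactly as the usual getter would.
    Vector<HTMLNode> body = ArticleGet
        .usual("div", "class", TextComparitor.EQ, "content")
        .apply(url, page);

    // Strip <IMG> elements whose SRC looks like an advertisement.  IMG is a
    // singleton tag, so removing the TagNode removes the whole element.
    body.removeIf((HTMLNode n) ->
    {
        if (! (n instanceof TagNode)) return false;
        TagNode tn = (TagNode) n;
        if (! tn.tok.equals("img")) return false;                     // 'tok': assumed tag-name field
        String src = tn.AV("src");
        return (src != null) && src.toLowerCase().contains("/ads/");  // illustrative test
    });

    return body;
};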
keepOriginalPageHTML - When this is TRUE, the original page HTML will be stored in the result set. When this is FALSE, null will be stored in place of the original page data.
NOTE: The original page HTML is the source HTML that is fed into the ArticleGet lambda - it contains the "pre-processed HTML."

pause - If there are numerous articles to download, pass an instance of class Pause, and intermediate progress can be saved, and reloaded at a later time.

log - This parameter may not be null, or a NullPointerException will throw. As articles are downloaded, notices will be posted to this 'log' by this method. This parameter expects an implementation of Java's interface java.lang.Appendable, which allows for a wide range of options when logging intermediate messages.

Class or Interface Instance              Use & Purpose
'System.out'                             Sends text to the standard-out terminal.
Torello.Java.StorageWriter               Sends text to System.out, and saves it, internally.
FileWriter, PrintWriter, StringWriter    General-purpose Java text-output classes.
FileOutputStream, PrintStream            More general-purpose Java text-output classes.
IMPORTANT: The interface Appendable requires that the checked exception IOException be caught when using its append(CharSequence) methods.
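For illustration (standard JDK classes only, file-name hypothetical): a FileWriter works as the 'log', and both its construction and its append(...) calls can throw the checked IOException.

try
{
    java.lang.Appendable log = new java.io.FileWriter("scrape-log.txt");
    log.append("Beginning article scrape...\n");    // append(...) may throw IOException
}
catch (java.io.IOException ioe)
    { System.err.println("Could not write to the log: " + ioe.getMessage()); }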
Returns:
A Vector that is exactly parallel to the input Vector<Vector<String>> articleURLs will be returned. Each element of each of the sub-Vector's in this two-dimensional Vector will hold an instance of the enumerated-type 'DownloadResult'. The constant-value in 'DownloadResult' will identify whether or not the Article pointed to by the URL at that Vector-location downloaded successfully.

If the download failed, then the value of the enum 'DownloadResult' will identify the error that occurred when attempting to scrape that particular news-story URL.
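A short, hedged sketch of consuming that return-value - DownloadResult.SUCCESS is the constant stored by the Exact Method Body below on a successful download, and the other variables are the ones declared in the Example near the top of this page:

Vector<Vector<DownloadResult>> results =
    ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, sw);

// Tally the parallel result-Vector.  Every URL in 'articleURLs' has exactly one
// DownloadResult at the same two-dimensional location.
int successes = 0, failures = 0;

for (Vector<DownloadResult> section : results)
    for (DownloadResult dr : section)
        if (dr == DownloadResult.SUCCESS)   successes++;
        else                                failures++;

System.out.println(successes + " articles downloaded, " + failures + " failed or skipped.");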
Throws:

PauseException - If there is an error when attempting to save the download-state.

ReceiveException - If there are any problems with the ScrapedArticleReceiver.
NOTE: A ReceiveException implies that the user's code has failed to properly handle or save an instance of Article that was downloaded, successfully, by this class ScrapeArticles. A ReceiveException will halt the download process immediately, and the download-state will be saved if the user has provided a reference to the Pause parameter.
NOTE: Other, internally-caused download-exceptions will be handled and logged (without halting the entire download process), and downloading will continue. A note about the internally-produced exception will be printed to the log, and an appropriate instance of enum DownloadResult will be put in the return Vector.

java.io.IOException - This exception is required for any method that uses Java's interface java.lang.Appendable. Here, the 'Appendable' is the log, and if writing to this user-provided 'log' produces an exception, then download progress will halt immediately, and the download-state will be saved if the user has provided a reference to the Pause parameter.
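A hedged call-site sketch covering all three declared exceptions (again re-using the variables from the Example near the top of this page; the error messages are illustrative):

try
{
    ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, sw);
}
catch (PauseException pe)
    { System.err.println("Could not save the download-state: " + pe.getMessage()); }
catch (ReceiveException re)
    { System.err.println("The ScrapedArticleReceiver failed, download halted: " + re.getMessage()); }
catch (java.io.IOException ioe)
    { System.err.println("Writing to the 'log' failed, download halted: " + ioe.getMessage()); }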
Code:
Exact Method Body:
log.append(
    "\n" + C.BRED +
    "*****************************************************************************************\n" +
    "*****************************************************************************************\n" +
    C.RESET + " Downloading Articles" + C.BRED + "\n" +
    "*****************************************************************************************\n" +
    "*****************************************************************************************\n" +
    C.RESET + '\n'
);

// The loop variables, and the return-result Vector.
int     outerCounter    = 0;
int     innerCounter    = 0;
int     successCounter  = 0;
boolean firstIteration  = true;

Vector<Vector<DownloadResult>> ret = null;
URL     url = null;
Runtime rt  = Runtime.getRuntime();

// If the user has passed an instance of 'pause' then it should be loaded from disk.
if (pause != null)
{
    Ret4<Vector<Vector<DownloadResult>>, Integer, Integer, Integer> r = pause.loadState();

    ret             = r.a;
    outerCounter    = r.b.intValue();
    innerCounter    = r.c.intValue();
    successCounter  = r.d.intValue();
}

// If the user did not provide a "Pause" mechanism, **OR** the "Pause Mechanism" asserts
// that the download process is starting from the beginning of the article-URL Vector,
// THEN a *new vector* should be built.
if ((pause == null) || ((outerCounter == 0) && (innerCounter == 0) && (successCounter == 0)))
{
    // Need to instantiate a brand new return vector.  The downloader is starting over
    // at the beginning of the Article URL list.
    ret = new Vector<>(articleURLs.size());

    // Initializes the capacity (sizes) of the two-dimensional "Return Vector."
    // NOTE: The return Vector is exactly parallel to the input "articleURLs"
    //       two-dimensional input Vector.
    for (int i=0; i < articleURLs.size(); i++)
        ret.add(new Vector<DownloadResult>(articleURLs.elementAt(i).size()));
}

for (; outerCounter < articleURLs.size(); outerCounter++)
{
    // System.out.println("outerCounter=" + outerCounter + ", innerCounter=" + innerCounter + ", articleURLs.size()=" + articleURLs.size());
    // System.out.println("articleURLs.elementAt(" + outerCounter + ").size()=" + articleURLs.elementAt(outerCounter).size());

    for (
        innerCounter = (firstIteration ? innerCounter : 0);
        innerCounter < articleURLs.elementAt(outerCounter).size();
        innerCounter++
    )
    try
    {
        firstIteration = false;

        String urlStr = articleURLs.elementAt(outerCounter).elementAt(innerCounter);

        // *******************************************************************************
        // Instantiate the URL object from the URLStr String.
        // *******************************************************************************

        // Should never happen, because each URL will have been tested / instantiated
        // in the previous method.
        try
            { url = new URL(urlStr); }
        catch (Exception e)
        {
            log.append("Could not instantiate URL-String into URL for [" + urlStr + "].\n");
            ret.elementAt(outerCounter).add(DownloadResult.BAD_ARTICLE_URL);
            continue;
        }

        // *******************************************************************************
        // Run the Garbage Collector, Print Article URL and Number to log.
        // *******************************************************************************

        rt.gc();
        String freeMem  = StringParse.commas(rt.freeMemory());
        String totalMem = StringParse.commas(rt.totalMemory());

        log.append(
            "\nVisiting URL: [" +
            C.YELLOW + StringParse.zeroPad10e4(outerCounter) + C.RESET +
            " of " + StringParse.zeroPad10e4(articleURLs.size()) + ", " +
            C.YELLOW + StringParse.zeroPad10e4(innerCounter) + C.RESET +
            " of " + StringParse.zeroPad10e4(articleURLs.elementAt(outerCounter).size()) +
            "] " + C.CYAN + " - " + url + C.RESET + '\n' +
            "Available Memory: " + C.YELLOW + freeMem  + C.RESET + '\t' +
            "Total Memory: "     + C.YELLOW + totalMem + C.RESET + '\n'
        );

        // *******************************************************************************
        // Scrape the web-page
        // *******************************************************************************

        int              retryCount = 0;
        Vector<HTMLNode> page       = null;

        while ((page == null) && (retryCount < 5))
            try
                { page = HTMLPageMWT.getPageTokens(15, TimeUnit.SECONDS, url, false); }
            catch (Exception e)
            {
                log.append(HTTPCodes.convertMessageVerbose(e, url, 1) + '\n');
                retryCount++;
            }

        // *******************************************************************************
        // Verify the results of scraping the web-page
        // *******************************************************************************

        if (page == null)
        {
            log.append(C.BRED + "\tArticle could not download, max 5 retry counts." + C.RESET + '\n');
            ret.elementAt(outerCounter).add(DownloadResult.COULD_NOT_DOWNLOAD);
            continue;
        }

        if (page.size() == 0)
        {
            log.append(C.BRED + "\tArticle was retrieved, but page-vector was empty" + C.RESET + '\n');
            ret.elementAt(outerCounter).add(DownloadResult.EMPTY_PAGE_VECTOR);
            continue;
        }

        log.append("\tPage contains (" + C.YELLOW + page.size() + C.RESET + ") HTMLNodes.\n");

        // *******************************************************************************
        // Retrieve the HTML <TITLE> element from the page - if it has one.
        // *******************************************************************************

        String title = Util.textNodesString(TagNodeGetInclusive.first(page, "title"));

        if (title.length() > 0)
            log.append("\tPage <TITLE> element is: " + C.YELLOW + title + C.RESET + '\n');
        else
            log.append("\tPage has no <TITLE> element, or it was empty.\n");

        // *******************************************************************************
        // Use the Article Getter to get it, make sure to watch for exceptions.
        // *******************************************************************************

        Vector<HTMLNode> article = null;

        try
            { article = articleGetter.apply(url, page); }
        catch (ArticleGetException e)
        {
            log.append(
                C.BRED + "\tArticleGet.apply(...) failed: " + e.getMessage() + C.RESET +
                "\nException Cause Chain:\n" + EXCC.toString(e) + '\n'
            );
            ret.elementAt(outerCounter).add(DownloadResult.ARTICLE_GET_EXCEPTION);
            continue;
        }

        // *******************************************************************************
        // Verify the results of article get
        // *******************************************************************************

        if (article == null)
        {
            log.append(C.BRED + "\tContent-body not found by ArticleGet.apply(...)\n" + C.RESET);
            ret.elementAt(outerCounter).add(DownloadResult.ARTICLE_GET_RETURNED_NULL);
            continue;
        }

        if (article.size() == 0)
        {
            log.append(C.BRED + "\tContent-body not found by ArticleGet.apply(...)\n" + C.RESET);
            ret.elementAt(outerCounter).add(DownloadResult.ARTICLE_GET_RETURNED_EMPTY_VECTOR);
            continue;
        }

        log.append("\tArticle body contains (" + C.YELLOW + article.size() + C.RESET + ") HTMLNodes.\n");

        // *******************************************************************************
        // Retrieve the positions of the images
        // *******************************************************************************

        int[] imagePosArr = InnerTagFind.all(
            article, "img", "src",
            (String src) -> ! StrCmpr.startsWithXOR_CI(src.trim(), "data:")
        );

        Vector<URL> imageURLs = Links.resolveSRCs(article, imagePosArr, url);

        if (skipArticlesWithoutPhotos && (imageURLs.size() == 0))
        {
            log.append(C.BRED + "\tArticle content contained 0 HTML IMG elements" + C.RESET + '\n');
            ret.elementAt(outerCounter).add(DownloadResult.NO_IMAGES_FOUND);
            continue;
        }

        log.append("\tArticle contains (" + C.YELLOW + imageURLs.size() + C.RESET + ") image TagNodes.\n");

        // *******************************************************************************
        // Check the banner-situation.  Count all images, and less that number by the
        // number of "banner-images"
        // *******************************************************************************

        int imageCount = imageURLs.size();

        if (bannerAndAdFinder != null)
            for (int pos : imagePosArr)
                if (bannerAndAdFinder.test(((TagNode) article.elementAt(pos)).AV("src")))
                    imageCount--;

        if (skipArticlesWithoutPhotos && (imageCount == 0))
        {
            log.append(C.BRED + "\tAll images inside article were banner images" + C.RESET + '\n');
            ret.elementAt(outerCounter).add(DownloadResult.NO_IMAGES_FOUND_ONLY_BANNERS);
            continue;
        }

        if (bannerAndAdFinder != null)
            log.append("\tArticle contains (" + C.YELLOW + imageCount + C.RESET + ") non-banner image TagNodes.\n");

        // *******************************************************************************
        // Write the results to the output file
        // *******************************************************************************

        Article articleResult = new Article(
            url, title, (keepOriginalPageHTML ? page : null), article, imageURLs, imagePosArr
        );

        // The article was successfully downloaded and parsed.  Send it to the "Receiver"
        // and add DownloadResult to the return vector.
        log.append(C.GREEN + "ARTICLE LOADED." + C.RESET + " Sending to ScrapedArticleReceiver.\n");

        articleReceiver.receive(articleResult, outerCounter, innerCounter);
        ret.elementAt(outerCounter).add(DownloadResult.SUCCESS);
        successCounter++;
    }
    catch (ReceiveException re)
    {
        // NOTE: If there was a "ReceiveException" then article-downloading must be halted
        //       immediately.  A ReceiveException implies that the user did not properly
        //       handle the downloaded Article, and the user's code would have to be debugged.
        log.append(
            "There was an error when attempting to pass the downloaded article to the " +
            "ArticleReceiver.  Unrecoverable.  Saving download state, and halting download.\n"
        );

        // Make sure to save the internal download state
        if (pause != null) pause.saveState(ret, outerCounter, innerCounter, successCounter);

        // Make sure to stop the download process now.  If the article "Receiver" failed to
        // save or store a received-article, there is NO POINT IN CONTINUING THE DOWNLOADER.
        //
        // NOTE: This will cause the method to exit with error, make sure to stop the
        //       "MWT Thread."  Remember, this is just a simple "Monitor Thread" that
        //       prevents a download from hanging.
        HTMLPageMWT.shutdownMWTThreads();

        throw re;
    }
    catch (IOException ioe)
    {
        // This exception occurs if writing the "Appendable" (the log) fails.  If this
        // happens, download should halt immediately, and the internal-state should be
        // saved to the 'pause' variable.
        if (pause != null) pause.saveState(ret, outerCounter, innerCounter, successCounter);

        // Need to stop the download process.  IOException could ONLY BE the result of the
        // "Appendable.append" method.  None of the other commands throw IOException.
        //
        // ALSO: If the "Appendable log" never fails (which is 99% likely not to happen),
        //       this catch-statement will never actually execute.  However, if
        //       Appendable.append did, in fact, fail - then downloading cannot continue.
        //
        // NOTE: This will cause the method to exit with error, make sure to stop the
        //       "MWT Thread."  Remember, this is just a simple "Monitor Thread" that
        //       prevents a download from hanging.
        HTMLPageMWT.shutdownMWTThreads();

        throw ioe;
    }
    catch (Exception e)
    {
        // *******************************************************************************
        // Handle "Unknown Exception" case.
        // *******************************************************************************
        log.append(
            "There was an unknown Exception:\n" + EXCC.toString(e) +
            "\nSkipping URL: " + url + '\n'
        );
        ret.elementAt(outerCounter).add(DownloadResult.UNKNOWN_EXCEPTION);
    }
    finally
    {
        // *******************************************************************************
        // Write the current "READ STATE" information (two integers)
        // *******************************************************************************
        if (pause != null) pause.saveState(ret, outerCounter, innerCounter, successCounter);
    }
}

log.append(
    C.BRED +
    "*****************************************************************************************\n" +
    C.RESET +
    "Traversing Site Completed.\n" +
    "Loaded a total of (" + successCounter + ") articles.\n"
);

// Returns the two-dimensional "Download Result" Vector.
// Make sure to stop the "Max Wait Time Threads"
HTMLPageMWT.shutdownMWTThreads();

return ret;