How to Extract Web Data using Node.js?

In this blog, we’ll find out how to use Node.js and its packages to do fast and effective data scraping for single-page applications. This will help us collect and use important data that isn’t always available through APIs. Let’s go through it.


Tip: Sharing and Reusing JS Modules with bit.dev

Use Bit to encapsulate components or modules with all their setup and dependencies. Share them through Bit’s cloud, collaborate with your team, and use them anywhere.

What is Web Data Extraction?

Web data extraction is a technique for scraping data from websites with a script. Data scraping automates the tedious task of copying data from different websites by hand.

Generally, web scraping is performed when the target websites don’t expose an API for fetching the data. Some common data scraping scenarios include:

  • Extracting emails from different websites for sales leads.
  • Extracting news headlines from news websites.
  • Extracting product data from different e-commerce sites.

 

Why do we need web scraping when e-commerce sites expose APIs (Product Advertising APIs) for fetching product data?

E-commerce sites expose only part of their product data through these APIs, so web scraping is a more effective way to collect the maximum amount of product data.

Product comparison websites commonly rely on data scraping. Even Google scrapes and crawls the web to index search results.

What Will We Need?

Getting started with data scraping is easy, and it breaks down into two simple parts:

  • Fetching the data by making an HTTP request
  • Extracting the important data by parsing the HTML DOM

 

We will be using Node.js for the data scraping, along with two open-source npm modules:

  • Axios – a promise-based HTTP client for the browser and Node.js.
  • Cheerio – makes it easy to select, edit, and view DOM elements.

 

You can learn more by comparing the well-known HTTP request libraries.

Tip: Don’t duplicate common code. Use tools like Bit to organize, share, and discover components across apps so you can build faster. Take a look.

Setup

The setup is very simple. We create a new folder and run this command inside it to create a package.json file. Let’s write the recipe that will make our food delicious.

    
     npm init -y

    
   

Before we start cooking, let’s gather the ingredients for our recipe. Add Axios and Cheerio from npm as our dependencies.

    
     npm install axios cheerio

    
   

Then, import them in the `index.js` file:

    
     const axios = require('axios');
     const cheerio = require('cheerio');
    
   

Making a Request

After collecting all the ingredients, let’s start cooking. We are scraping data from the HackerNews site, so we have to make an HTTP request to get its content. That’s where axios comes into play.
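
A minimal sketch of that request could look like the following; the `getWebsiteContent` helper name here is just for illustration:

     const axios = require('axios');

     // Fetch the raw HTML of a page as a string.
     const getWebsiteContent = async (url) => {
       const response = await axios.get(url);
       return response.data; // axios exposes the response body on `data`
     };

     getWebsiteContent('https://news.ycombinator.com')
       .then((html) => console.log(html))
       .catch((error) => console.error(error.message));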

Our response will look like this:

    
     <html op="news">
<head>
<meta name="referrer" content="origin">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" type="text/css" href="news.css?oq5SsJ3ZDmp6sivPZMMb">
<link rel="shortcut icon" href="favicon.ico">
<link rel="alternate" type="application/rss+xml" title="RSS" href="rss">
<title>Hacker News</title>
</head>
<body>
<center>
<table id="hnmain" border="0" cellpadding="0" cellspacing="0" width="85%" bgcolor="#f6f6ef">
.
.
.
</body>
<script type='text/javascript' src='hn.js?oq5SsJ3ZDmp6sivPZMMb'></script>
</html>
    
   

We receive similar HTML content to what we see when we request the page from a browser like Chrome. Next, we can use Chrome Developer Tools to search through the HTML of the webpage and pick out the data we need.

We want to extract the news headings and their associated links. You can view the HTML of a webpage by right-clicking anywhere on it and selecting “Inspect”.

Parsing HTML with Cheerio.js

Cheerio is the jQuery for Node.js: we use selectors to pick tags out of an HTML document. The selector syntax is borrowed from jQuery. Using Chrome DevTools, we need to find the selectors for the news headlines and their links. Let’s add a few spices to our food.

 

First, we have to load the HTML. This step is implicit in jQuery, since jQuery operates on the one, built-in DOM. With Cheerio, we need to pass in the HTML document ourselves. After loading the HTML, we iterate over all occurrences of the table rows to scrape every news item on the page.
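
A minimal sketch of that parsing step, assuming the HTML string returned by the earlier request, might look like this; the `.athing` and `.titleline > a` selectors reflect the HackerNews markup found via DevTools and may need adjusting if the site changes:

     const cheerio = require('cheerio');

     // Parse the HackerNews HTML and collect { title, link } pairs.
     const parseNews = (html) => {
       const $ = cheerio.load(html); // load the HTML document into Cheerio

       const news = [];
       // Each story sits in a <tr class="athing"> row of the main table.
       $('.athing').each((_, row) => {
         const anchor = $(row).find('.titleline > a').first();
         news.push({
           title: anchor.text(),
           link: anchor.attr('href'),
         });
       });

       return news;
     };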

The result will look like this:

 

    
     [
  {
    title: 'Malaysia seeks $7.5B in reparations from Goldman Sachs (reuters.com)',
    link: 'https://www.reuters.com/article/us-malaysia-politics-1mdb-goldman/malaysia-seeks-7-5-billion-in-reparations-from-goldman-sachs-ft-idUSKCN1OK0GU'
  },
  {
    title: 'The World Through the Eyes of the US (pudding.cool)',
    link: 'https://pudding.cool/2018/12/countries/'
  },
  .
  .
  .
]
    
   

Now we have an array of JavaScript objects containing the titles and links of the news items from the HackerNews site. In the same way, we can extract data from a large number of other websites. So, our food is prepared and it looks delicious too.
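
Putting the two pieces together (the hypothetical `getWebsiteContent` and `parseNews` helpers sketched above), the whole flow in `index.js` could read like this; running it prints an array like the one above:

     // Fetch the page, parse it, and print the results.
     const run = async () => {
       const html = await getWebsiteContent('https://news.ycombinator.com');
       const news = parseNews(html);
       console.log(news); // [{ title: '...', link: '...' }, ...]
     };

     run();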

Conclusion

In this blog, we first learned what web scraping is and how we can use it to automate the collection of data from different websites.

Many websites use a Single Page Application (SPA) architecture, generating their content dynamically with JavaScript. With axios and similar npm packages such as request, we only get the response of the initial HTTP request and cannot execute the JavaScript that renders the dynamic content. Therefore, we can only extract data from static sites this way.

For more information, contact 3i Data Scraping or ask for a free quote!

