Bots, software that runs automated tasks over the internet with minimal or no human intervention, first appeared online in 1988, providing services to users of the Internet Relay Chat messaging system. Thirty years later, bots are seen as responsible for influencing the results of the 2016 U.S. presidential election and causing crises at two of the most popular social media brands.
While the world comes to grips with bad bots’ new, high-profile role in hijacking social media platforms and potentially thwarting democracy, it’s easy to forget that these nefarious software scripts also are wreaking havoc on much broader swaths of the economy; legislation is only just beginning to catch up. Until it does, bad bots are running amok on the internet without any legal ramifications.
In this eWEEK Data Point article, Reid Tatoris, vice president of product marketing and outreach at Distil Networks, explains nine ways that he believes bots are ruining the internet without breaking the law.
Data Point No. 1: Exploiting new account promotions
Businesses often run new account promotions (i.e., offering some sort of reward for signing up) to grow their user base. Bots game the system by registering thousands of new accounts per hour, collecting the reward points, and then transferring the points to a single account or selling the account login credentials on the dark web. This problem is particularly common in the online gaming and gambling industry, which has a higher proportion of bad bot traffic (51.3 percent) than any other.
Data Point No. 2: Airline ticket bots
The Better Online Ticket Sales (BOTS) Act makes online ticket scalping illegal, but the same type of bots are used in other industries, such as air travel. Airline ticket bots fraudulently reserve blocks of seats on flights, causing the price of the remaining unsold seats to increase dramatically and throwing off airline ticket sales. After gambling, the airline industry has the second-highest proportion of bad bot traffic (43.9 percent).
Data Point No. 3: Content scraping
Bad bots scrape content from a website with the intent of using it for purposes outside the site owner’s control. This activity isn’t illegal by itself; after all, you could scrape your own website without a hitch. Startups love it because it’s a cheap and powerful way to gather data without the need for partnerships.
But on the flip side, bad actors can steal content and publish it under their own names, attempting to take credit for it, and others will publish duplicate content in an attempt to harm the original publisher. When content is duplicated across the web, Google demotes the duplicate pages, resulting in diminished SEO rankings as well as lost advertising revenue.
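One way publishers catch scraped copies of their articles is near-duplicate detection. Below is a minimal, illustrative sketch using hashed word shingles and Jaccard similarity; the function names, shingle length and any threshold a publisher would apply are assumptions, not part of any specific product.

```python
import hashlib

def shingles(text, k=5):
    """Return the set of hashed k-word shingles for a document."""
    words = text.lower().split()
    return {
        hashlib.md5(" ".join(words[i:i + k]).encode()).hexdigest()
        for i in range(max(1, len(words) - k + 1))
    }

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "bad bots scrape content from a website with the intent of using it elsewhere"
copied = "bad bots scrape content from a website with the intent of reposting it elsewhere"
unrelated = "airline ticket bots reserve blocks of seats on flights to inflate prices"

sim_copy = jaccard(shingles(original), shingles(copied))
sim_other = jaccard(shingles(original), shingles(unrelated))
print(round(sim_copy, 2), round(sim_other, 2))  # prints: 0.54 0.0
```

A lightly edited copy still shares most of its shingles with the original, while unrelated text shares none, so a similarity score well above zero is a strong signal that content was lifted.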
Data Point No. 4: Price and product scraping
Bad actors and competitors seek to scrape information from legitimate online retail sites to gain product and pricing intelligence that can be used to undercut their pricing or position against their offerings. Whether termed “price scrapers,” “pricing bots” or “pricing intelligence solutions,” an entire industry has grown around the use of automated bots dedicated to scraping as much data as possible from online retailers’ websites. If a retailer loses enough competitive pricing wars fueled by price scraping bots, it risks going out of business. In 2017, 97 percent of websites with unique content or product and pricing information were hit with scraper bots.
Data Point No. 5: Form spam
Form spam occurs when spam bots flood online forms, such as “contact us” pages, with phony submissions filled with keywords, jargon-stuffed backlinks and useless information that doesn’t create leads or foster potential sales. Form spam wastes company resources by inundating staff email inboxes and forcing employees to spend time manually sorting legitimate submissions from fake ones. It can also damage marketing campaigns by causing legitimate customer leads to get lost in the shuffle or deleted by mistake.
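A common first line of defense is a server-side honeypot check. The sketch below assumes the form includes a hidden “website” field that real users never see or fill in; the field name, the link-count threshold and the function itself are illustrative assumptions, not a complete anti-spam solution.

```python
def is_spam_submission(form_data):
    """Heuristic filter for form spam (illustrative only).

    Assumes the form renders a hidden 'website' honeypot field
    that humans leave blank but naive bots auto-fill.
    """
    # Naive bots fill every field, including the hidden honeypot.
    if form_data.get("website"):
        return True
    # Submissions stuffed with links are a classic spam signal.
    if form_data.get("message", "").lower().count("http") >= 3:
        return True
    return False

human = {"name": "Ada", "message": "Please send pricing info.", "website": ""}
bot = {"name": "x", "message": "buy http://a http://b http://c", "website": "http://spam.example"}
print(is_spam_submission(human), is_spam_submission(bot))  # prints: False True
```

Honeypots catch only unsophisticated bots, but they cost nothing for legitimate users, which is why they are often layered under CAPTCHAs and rate limits rather than used alone.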
Data Point No. 6: Comment spam
Bots post inappropriate content or links in the comments section of a company’s website. Comment spam bots fill your site with low-quality backlinks that hurt your SEO rankings, or worse, malicious links that prompt site visitors to click on them and download malware. Additionally, moderators have to spend hours manually cleaning up the spam, a huge waste of company resources.
Data Point No. 7: Rogue reviews
Rogue reviews are another damaging type of spam, where negative reviews of products, reviews pointing to competitor sites or products and comments containing links to malware are posted on review sites, making the hosting site appear untrustworthy. This type of negative spamming damages the brand’s image and creates a bad user experience. Rogue reviews can be positive, too; unscrupulous companies may flood review sites with fake positive reviews to support the launch of a new product, hoping to boost rankings and eventually sales. Either way, these fake reviews make it harder for real customers to get accurate information and make informed purchasing decisions.
Data Point No. 8: Site slowdowns
One of the worst side effects of all this bad bot traffic is website slowdowns and downtime. The traffic hits at layer seven, so firewalls and load balancers are unaffected while the web application and back end keel over. For many online businesses, website downtime correlates directly with a decline in sales; if a retailer’s website goes down during the holidays, for example, the consequences can be severe. Even if the added bot traffic isn’t enough to bring the site down, the bandwidth it hogs will cause slowdowns that damage the user experience for real customers. A slowdown in load time of just a few seconds can lead to cart abandonment and decreased sales.
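Because this traffic arrives at the application layer, one common layer-seven mitigation is per-client rate limiting. The sketch below is a minimal in-memory sliding-window limiter under assumed limits; production deployments typically track counters in a shared store such as Redis and combine limits with other bot-detection signals.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP.

    A minimal in-memory sketch; the limit and window values here are
    illustrative, not recommendations.
    """

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # likely automated traffic hammering the app layer
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=5, window=1.0)
results = [limiter.allow("203.0.113.9", now=0.1 * i) for i in range(8)]
print(results)  # first 5 requests pass, the burst beyond that is refused
```

Refused requests can be answered cheaply with a 429 status instead of hitting the application and database, which is what keeps a scraping or reservation-bot burst from taking the site down for real customers.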
Data Point No. 9: Skewed analytics
Another negative side effect of bad bot traffic is that repeated link clicks, page requests and form submissions made by bots skew application-based metrics such as visit counts, frequencies and conversion rates.
Marketers spend hours writing copy, designing pages, determining calls-to-action and exhaustively looking at metrics in order to optimize the user experience and glean insights about how a customer interacts with the website. Companywide decisions are made every day based upon this marketing insight, with the belief that they are “listening to the customer.”
If a company is unable to filter out this bad bot traffic or distinguish it from real human users that visit its website, that means that every A/B test, every conversion metric, every traffic analysis and every business decision made with this data is skewed.
If you have a suggestion for an eWEEK Data Point article, email [email protected].