Artificial Intelligence and the internet

Is artificial intelligence and the internet a good match? I have been thinking about this recently because of how prevalent AI has become. Is it making the internet easier and better to use? We are still at the early stages of AI development, and some jobs need a human touch. Investments in AI are growing at pace, but I sometimes wonder if this is a good thing.

Anna Alonso
Potential
April 16, 2022 4:25 pm

Artificial intelligence and the internet is a good match. With 4.66 billion active internet users globally, we need technologies to manage the data and decisions of more than half the global population every day. Without artificial intelligence the internet would not exist as it is today. The internet’s growth is dependent on a strong, functioning AI base.

Others have already written about AI’s limitations and I agree with them. Faulty AI is costing businesses billions of dollars. Digital advertising is a case in point. When I make an internet purchase, I start getting ads for the exact item I have already bought! Do businesses know they are paying for ad impressions on people who have no interest in buying their items because they already own them? We all understand there is a margin of error for these things. No one goes into digital advertising expecting laser-precision targeting. But when AI is not sophisticated enough to filter out people who have already bought the product being advertised, gigantic amounts of ad spend are wasted.
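
To be concrete about what “filtering out” would mean, here is a rough sketch in Python. Everything in it is made up for illustration (the users, the purchase records, the product names); it just shows how trivially an advertiser’s audience could exclude people who already own the item.

```python
# Illustrative sketch: exclude users who already bought the advertised item
# from the ad audience. All data here is invented for the example.

purchases = {
    "user_1": {"running_shoes", "water_bottle"},
    "user_2": {"yoga_mat"},
    "user_3": set(),
}

def ad_audience(candidate_users, advertised_item):
    """Return only the users who have not already bought the advertised item."""
    return [
        user for user in candidate_users
        if advertised_item not in purchases.get(user, set())
    ]

print(ad_audience(["user_1", "user_2", "user_3"], "running_shoes"))
# -> ['user_2', 'user_3']  (user_1 already owns the item, so no ad spend on them)
```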

As others have said, social media AI can hurt your experience rather than enhance it. Facebook is determined to show me sales funnel ads. My finger might have slipped once and accidentally clicked on one of those ads. Now it’s all Facebook shows me. I’m not against ads per se. I don’t run an ad blocker and I regularly read ad copy. But Facebook’s AI is pushing me towards despising ads by showing me something I’m clearly not interested in, over and over again. I can see myself becoming an ad-block convert if this continues, which means that if this scenario is playing out for other users: 1) the businesses paying for ads are losing out, and 2) the digital advertising industry loses out as a whole.

It sounds like I’m being too negative about AI. Don’t forget I started this post by saying that AI and the internet is a good match, and I stand by that. Translation technology has been a life-saver during my travels. Of course AI translation cannot yet compare to a human translator or interpreter, but it has improved by leaps and bounds. A time will come when translation technology will match or even exceed the ability of a human. Don’t think it’ll happen? Remember, in 1997 Deep Blue beat the reigning World Chess Champion, Garry Kasparov. People didn’t think that would happen either. AI is powerful. When harnessed properly, it has massive potential to make the internet safer and more productive.

Jason Ng
Impact
April 13, 2022 1:35 pm

When you say artificial intelligence and the internet, it’s hard to separate ‘the internet’ from social media. Social media platforms have relied heavily on AI. Millions and even billions of users generate work for a platform that could realistically never be completed with a ‘human touch’ alone. Let’s look at the example of moderation.

Internet moderators or ‘mods’ have been around since the beginning of the internet. Forum mods were the glue that kept the forums of old from turning into a free-for-all of spam and abuse. Some let their senior position in the forum hierarchy go to their heads and abused their power, but by and large moderators were seen as a good thing. They have got a bad rap in recent years, though. The stereotypical image of a Reddit or Discord mod is an anti-social, unhealthy and perverted individual living in his mother’s basement. How accurate that is, I don’t know. I do know, however, that moderators are a much-needed part of any internet community.

Many platforms have tried to replace human moderators with artificial intelligence… with mixed results. There are videos, tweets and posts on YouTube, Twitter and Facebook respectively that clearly violate community guidelines, yet they are left alone. On the other hand, some innocent content has been taken down without warning. Some people have worked for years on their YouTube channels, only to have their body of work removed because of a decision made by an automated algorithm. AI performance isn’t where it needs to be, and at the current stage of development many mistakes will be made.

In my opinion, AI should still be used for moderation, especially on large social media platforms. Mistakes will be made. This is frustrating, especially if you are wrongly penalized for supposedly uploading inappropriate content. However, considering the psychological cost that human moderators have to endure, it’s AI all the way for me. The image below is taken from a Sunday Times article published on 31 October 2021, ‘The Job from Hell, Who’d be a Facebook moderator?’ Given what Facebook moderators have to look at daily, from violence to other harmful content, I’m not surprised to learn that the job takes a mental toll.

In the article, one moderator describes it succinctly: “It’s just this endless mix of the worst of humanity.” Some platforms offer psychological counselling to their moderators. I think the job should be shifted entirely to AI. AI will get it wrong sometimes. However, it will get it right many times as well. It is simply not worth putting humans through this “endless mix of the worst of humanity”. In the example of moderation, I feel AI and the internet is a good match and one that should be further encouraged.
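
To sketch out what I mean by fully automated moderation, something like the flow below would do the job, mistakes and all. This is only an illustration: the `toxicity_score` function is a crude stand-in for whatever model a platform actually runs, and the 0.9 threshold is a number I made up.

```python
# Rough sketch of fully automated moderation: a (hypothetical) model scores
# each post and the platform acts on the score with no human in the loop.

def toxicity_score(text: str) -> float:
    """Stand-in for a real classifier; here, a crude keyword heuristic."""
    banned_words = {"slur_example", "threat_example"}
    words = set(text.lower().split())
    return 1.0 if words & banned_words else 0.1

def moderate(post: str) -> str:
    score = toxicity_score(post)
    if score >= 0.9:
        return "removed"   # confident violation: take it down automatically
    return "left up"       # everything else stays, false negatives and all

print(moderate("have a nice day"))            # left up
print(moderate("this is a threat_example"))   # removed
```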

Facebook moderator - Times article.jpg
Jenna T
Potential
May 22, 2022 3:51 am

I agree with the consensus on AI. It has brought major benefits to the internet industry. A lot happens behind the scenes that we take for granted while AI is whizzing away making our internet experience seamless. Nothing can be universally good, can it?

My rage moment with AI happened when Instagram changed its algorithm around 2016. To be fair, the rage should have been directed at the IG team that created and implemented the algo; nonetheless, I had some choice words about the change back then. Previously, the posts of users I was following were presented in chronological order. Then IG introduced an algorithm that showed you more of what your friends and family were posting.

This was a kick in the teeth to power users who had learnt the ins and outs of IG and relied on it for their income. It was also problematic in the message IG was sending: this robot will show you what you want to see, better than what you would seek out yourself. The algo change hurt my IG experience. I could navigate IG perfectly when posts were shown in chronological order. I knew which users posted more often, and I’d skim past their posts. But when I saw a post from a friend who hadn’t posted in some time, I would take more time consuming their content. The algo change was frustrating. My thoughts were: stop telling me what I want to see. I know what I like and I’ll find it!
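
For anyone who never used the old IG, the difference is easy to show. This is just a toy comparison: the posts and the ‘affinity’ score (standing in for whatever signals the real algorithm uses) are invented.

```python
# Toy comparison of a chronological feed vs an engagement-ranked feed.
# Posts and 'affinity' scores are made up for illustration.

posts = [
    {"author": "friend_a", "timestamp": 3, "affinity": 0.9},
    {"author": "brand_x",  "timestamp": 5, "affinity": 0.2},
    {"author": "friend_b", "timestamp": 1, "affinity": 0.7},
]

chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)
ranked = sorted(posts, key=lambda p: p["affinity"], reverse=True)

print([p["author"] for p in chronological])  # ['brand_x', 'friend_a', 'friend_b']
print([p["author"] for p in ranked])         # ['friend_a', 'friend_b', 'brand_x']
```

Same posts, completely different feed, and the second ordering is the robot deciding for you.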

I also noticed the algo removed some users’ content from my feed entirely because it was supposedly learning what I liked. No surprise that content creators on IG went ballistic. Imagine working so hard under a system to build a following, then suddenly, without warning, the system is changed. I liken it to studying for a math test, only for them to reveal on exam day that it will be a science test instead!

duongbui
Potential
April 17, 2022 5:14 pm

The big debate about AI is how powerful it can get before it becomes detrimental. There will come a time when robots stop listening to us and enslave the human race! Bow down to our robot overlords! We laugh at this now, even though I see elements of truth in this line of thinking. AI may well be in its early stages, but it is powerful, and in certain cases it is becoming uncontrollable.

In 2018, Facebook made a change to its algorithm designed to make the social media platform a friendlier place. The algorithm didn’t care what Mark Zuckerberg and Facebook’s corporate brass thought – it had the opposite effect. The Wall Street Journal’s Facebook Files revealed that the algorithm made Facebook users angrier. It was mainly divisive content that was going viral on Facebook, encouraging publishers to alter their content accordingly.

I’m seeing signs of Google losing control as well. @crtlaltdel, you have written about SERPs, which reminded me of what Google is trying to do. First and foremost, Google wants to present pages that provide a positive user experience. This is their bread and butter. Google’s advice in the past was to limit the number of ads a website owner places on each page; too many and they compromise the user experience. Google’s algorithm should be punishing site owners who overdo it. Instead, it is becoming a frequent occurrence for highly ranked pages to be stuffed with ads. It’s a terrible user experience. The page could have the best content in the world, but if it’s hard to read, I bounce. Speaking of which, the bounce rate on these sites must be sky high. So why is Google pushing these pages to the top of its SERPs? Sounds like Google is having a hard time getting its AI to listen.

In some aspects of the internet, I don’t know how relevant AI is. Reddit has 52 million daily active users. It is an internet powerhouse. But according to Google, it shouldn’t be doing too well, because it fails its Core Web Vitals assessment! This is absurd, to say the least: one of the most successful websites in existence failing criteria that are supposed to be indicative of web success.
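
For context, the Core Web Vitals assessment boils down to a handful of thresholds. Roughly like this (the thresholds are Google’s published “good” cut-offs as I understand them; the example page’s metric values are invented):

```python
# Rough sketch of the pass/fail logic behind a Core Web Vitals assessment.
# Thresholds are Google's published "good" cut-offs (to the best of my
# knowledge); the example metrics for the page are made up.

GOOD_THRESHOLDS = {
    "LCP": 2.5,   # Largest Contentful Paint, seconds
    "FID": 100,   # First Input Delay, milliseconds
    "CLS": 0.1,   # Cumulative Layout Shift, unitless
}

def passes_core_web_vitals(metrics: dict) -> bool:
    return all(metrics[name] <= limit for name, limit in GOOD_THRESHOLDS.items())

example_page = {"LCP": 3.4, "FID": 80, "CLS": 0.25}
print(passes_core_web_vitals(example_page))  # False: LCP and CLS miss the cut
```

A site can fail every one of these and still be one of the most visited places on the internet, which is exactly my point.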

AI and the internet? It’s getting uncontrollable and we are losing our grip.

Reddit Fails Core Web Vitals.png
Josef Lind
Influence
April 17, 2022 6:20 am

I don’t think it is a question of whether AI and the internet is a good match. The internet needs AI. Without AI, the internet will descend into chaos. I’m coming at this from a web development slant. Since search engines became a ‘thing’ in the mid-1990s, people have been trying to game the system. As the AI tech was primitive back then, people could take advantage of loopholes in search engine algorithms, resulting in high rankings in search results. Some people became very rich by doing this. They would offer services to others, promising to get their websites ranked on the first page of these search engines. Other people would set up online stores and, knowing how to rank highly, rake in a huge amount of sales. And in the 2000s, when AdSense was new, some SEO-savvy people would launch websites and use SEO loopholes to rank highly and earn large amounts of ad revenue. We’re talking tens of thousands of dollars a month.

Google and other search engines have to play a cat-and-mouse game to ensure people can’t game the system. In fact, their survival depends on it. Google became the world’s dominant search engine because of one simple fact: it was better than the others. Manipulation of rankings equals a poor user experience. So while the AI tech has improved, in some ways things haven’t changed.

PageRank is Google’s AI tech, developed by Larry Page and Sergey Brin at Stanford University in 1996. The basic idea was that the quality of a webpage could be determined by the number of links pointing to it, often referred to as backlinks. A common term people use is a ‘vote of confidence’: if a website was linking to your webpage, one could infer that there was something of value on your webpage. People took the opportunity to game the AI and manipulate rankings, coming together to form link exchange groups. Website owners would reciprocally link to each other with the intention of boosting each other’s rankings; it’s like the original #likeforlike and #followforfollow.
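
To give a feel for the basic idea, here is a toy power-iteration version of PageRank. This is not Google’s actual implementation (the real thing has many refinements on top), and the link graph is made up; it just shows how a page’s score is built from the scores of the pages linking to it.

```python
# Toy PageRank: each page's score is accumulated from the pages linking to it.
# The link graph is invented for illustration.

links = {                      # page -> pages it links out to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share   # a 'vote of confidence'
        rank = new_rank
    return rank

print(pagerank(links))  # page 'c' ends up highest: it has the most backlinks
```

You can see why link exchange groups seemed so attractive: every extra incoming link feeds straight into the score.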

Another method was to post comments on someone’s blog with a link back to your own website. Google’s early AI saw this as a legitimate backlink. So as you can imagine, blog posts were spammed with hundreds of comments linking to all kinds of websites. This still happens today as there is a contingent of people who believe that comment links still hold considerable value in Google’s algo. I have a friend who runs a blog. He’s always so happy when he gets comments. I don’t have the heart to tell him what’s really going on.

Over time, Google realized what was up. Its AI became more sophisticated with the Panda and Penguin updates, so the SEO tactics of old stopped working. Attempts to manipulate rankings that worked so well in the 2000s now result in penalties from Google. In many ways, things haven’t changed: AI has improved, but people’s willingness to game the system will always remain. The ‘black hat’ SEO community is still going strong. If there’s an opportunity to rank higher on Google through some hack in the algorithm, you can bet people will take it. That’s why it’s not a question of whether AI and the internet match. The internet depends on AI. Without the ever-improving capabilities of AI, the internet would become just another technology relegated to being a relic of the past.

carpent0r
Potential
April 14, 2022 8:33 am

From a public-facing perspective, AI has been a big fail for internet technology companies (in my humble opinion). Let me explain my thinking. A lot of unicorns and exciting new businesses big up their proprietary technology that uses machine learning / artificial intelligence to provide a better user experience.

Others have talked about social media, so let’s start there. How often does social media get its recommendations wrong? YouTube, when I’ve told you countless times that I’m not interested in certain content, stop showing me that content, dammit! Your algo should be taking notice of my preferences, but you can’t get it right even when I slap you in the face with them. It’s like you’re trying to shove it down my throat, hoping I’ll eventually come round.

A couple of weeks ago my account was restricted on LinkedIn. I had to upload my passport to regain access. What exactly happened with your AI that flagged me as a fake user after I’d been on your platform for over 10 years? I know another guy who had the same problem, but when he uploaded his passport, which was legit by the way, he was permanently banned. Not quite sure why the LinkedIn AI thought he was sus.

This isn’t limited to social media by any stretch. Let’s move on to dating apps. Why does every dating app claim to have some proprietary tech that can home in on my preferences? It’s the usual spiel: you have a greater chance of finding a connection with the app because it uses AI to show profiles to people who are more likely to match. Some apps let you set your preferences. I’m giving you my preferences on a plate, Hinge! So stop showing me people I’m unlikely to match with. These companies are worth a lot of money on the premise that they have tech that facilitates matches. But their tech sucks! Their algo gets it wrong the majority of the time.
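
The frustrating part is that the hard preferences are trivial to respect. Something as simple as the sketch below (the profiles and the preference fields are made up, not any real app’s data model) would already be better than what I experience:

```python
# Minimal sketch of respecting a user's stated hard preferences before any
# clever matching model runs. Profiles and preference fields are invented.

my_preferences = {"max_distance_km": 25, "age_range": (28, 38)}

profiles = [
    {"name": "p1", "distance_km": 10, "age": 31},
    {"name": "p2", "distance_km": 80, "age": 30},
    {"name": "p3", "distance_km": 5,  "age": 45},
]

def within_preferences(profile, prefs):
    lo, hi = prefs["age_range"]
    return (profile["distance_km"] <= prefs["max_distance_km"]
            and lo <= profile["age"] <= hi)

print([p["name"] for p in profiles if within_preferences(p, my_preferences)])
# -> ['p1']  # the others should never be shown, whatever the fancy AI says
```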

And what about automated chat bots? I have never, ever had a positive experience with one. They never resolve my issues. When I’m forced to talk to one, I find myself counting the seconds until I have the option to speak to an actual person. The worrying thing about automated bots is that the answers they present are so basic; you can find all the info you need in the help/support section of the company’s website. The fact that companies are relying more on automated customer service reflects a need to relieve the workload of customer service agents, but also that a lot of incoming queries are simple, easily answered ones that require a basic look-up on the company’s website.
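
That really is all most of these bots seem to be doing, as far as I can tell: keyword look-up over the help pages. A caricature (the FAQ entries and questions are invented):

```python
# Caricature of an automated support bot: naive keyword matching over FAQ
# entries scraped from the help section. FAQ content here is made up.

faq = {
    "reset password": "Go to Settings > Security > Reset password.",
    "refund order":   "Refunds can be requested within 30 days from Order history.",
    "delete account": "Account deletion is under Settings > Privacy.",
}

def answer(question: str) -> str:
    words = set(question.lower().split())
    for keywords, reply in faq.items():
        if set(keywords.split()) & words:   # any keyword overlap counts
            return reply
    return "Sorry, I didn't understand. Would you like to talk to an agent?"

print(answer("How do I reset my password?"))   # canned FAQ answer
print(answer("My parcel arrived damaged"))     # falls through to the human option
```

Anything that isn’t already answered on the website falls straight through to a human anyway, which is exactly where I wanted to be in the first place.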

All of these are pretty big fails, and there’s a lot more where that came from. Bear in mind these are public-facing examples of AI. The reason they are fails, in my estimation, is that it is this very technology that these companies highlight as their unique selling point. If your USP is technology that sucks, e.g. a faulty recommendation algorithm or a machine-learning-based account suspension system, then of course I’ll see your AI as a fail.

eBay automated assistant.PNG
Irvin Blake
Influence
June 30, 2022 11:15 am

The internet as we know it falls apart without AI. That said, from a content creation perspective, AI is a minefield to deal with. As an amateur video editor, I’ve been hired a few times to edit videos for various YouTube channels. I’m not really a content creator myself, but I’m very interested in the process, and it’s always interesting to learn each channel’s interpretation of how to play nice with the algorithm. Either you need a clickbait title, or the video must be a certain length. Use the right hashtags and make sure you include subtitles. Theories abound.

The biggest annoyance for me is the contradiction between a major platform’s advice and the channels and articles its AI pushes. Let’s take Google as an example. SEO is as old as the internet. Every year the death knell is sounded: SEO is dead! Yet every year the SEO services industry grows. Google has always said that getting high rankings depends on quality content. In Google’s own SEO starter guide, I quote:

Creating compelling and useful content will likely influence your website more than any of the other factors.

If this is the case, why is black hat SEO still a thing? Why do people try to game the algorithm? The truth is that, to an extent, gaming the algorithm works. As much as Google tries to convince you otherwise, some publishing businesses flourish by gaming Google’s algo. That’s the annoying part: Google says it wants compelling and useful content, but then it pushes some of the most worthless content to the top.

I read an article about Michael Phelps on Essentially Sports with the clickbait title “We Created a Monster”. It’s an utterly useless article with no substance. So why does Google push it? I hadn’t even heard of Essentially Sports, but Google thought it wise to place it in my Google News feed. The article doesn’t provide any compelling or useful content. But don’t just take my word for it; here are some of the comments:

  • This was the most pointless article I’ve seen in quite a long time.
  • Hot garbage of an article.
  • People get paid for this low effort clickbait-y nonsense?
  • This sort of sorry excuse for journalism is all over the internet.
  • This is terrible writing.
  • Never have seen so many words that say absolutely nothing when compiled together.

Do I need to continue? It’s a trashy, no-effort article, yet Google values it highly enough to push it onto the news feed. We can therefore conclude that there are benefits to gaming the algorithm; Essentially Sports will be earning high ad revenue from it. I guess Google’s understanding of “compelling and useful content” is very different from the rest of ours.

Clickbait and Google