TheDownload AI
Beyond The Headlines: What The Latest AI News Actually Means For You
Most AI news is full of fluff and wind, which is also a great name for a middle-class cafe. Here at The Download we limit our news stories to those that matter and distill them into something you can not only read but also understand and then actually explain to someone else.
Because what’s the point in reading the news if you can’t then use it to impress your boss?
Meta AI, coming to an everything near you…
TL;DR
Meta’s gone full bore on AI and is incorporating its AI assistant across platforms like Instagram, Facebook and WhatsApp. This will not only make it even easier to send dank memes but also signals Meta’s push to operationalise AI across its platforms and prove the technology’s business value.
Right, What’s Happened?
Meta’s AI assistant was introduced last September and is now being integrated into the Instagram, Facebook and WhatsApp search bars. It’ll also appear directly in the main Facebook feed, presumably because that’s Zuck’s favourite first child…
All jokes aside, it’s because Facebook not only provides a ready-made ecosystem for Meta to test functionality and Gen AI use cases but is also the most obvious use case. If you’re having trouble picturing it, just imagine the Gen AI feature on your LinkedIn feed.
That said, the hope is that Meta’s integration of their AI assistant provides a little more value than LinkedIn’s currently does. Meta are certainly taking steps to ensure it does, having announced Llama 3, the next iteration of their foundational open source model, which they claim outperforms its peers across the key benchmarks. It’s certainly the only one that currently integrates search across both Google and Bing.
So the signs are promising that Meta aren’t just splashing their AI assistant across their platforms in the hope that something will stick.
Great, What Does This Actually Mean?
What this means in practice is that ChatGPT is basically coming to your Facebook search bar.
However, there are wider implications. We stated in our previous newsletter that 2024 will be a key year to prove not only that Gen AI can be operationalised from a business value perspective, but also that it’s economically viable and can be linked to real return on investment (ROI).
Meta holds a serious competitive advantage in the Gen AI space: they already have access to over 3 billion users across their platforms, and this is undoubtedly their opening salvo to become a player in the space.
So What Happens Next?
Zuck himself has stated that “at this point, our goal is not to compete with the open source models… It’s to compete with everything out there and be the leading AI in the world” and that this will be “the most intelligent AI assistant that people can use freely across the world”.
I can’t be the only one getting “Ready Player One” vibes every time this man opens his mouth now can I?
That aside, Meta has garnered something of a reputation for copying key features from competitors (*cough* Stories *cough* Reels). They’ve shown no remorse for this behaviour and are almost certainly going to continue leveraging their access to such a broad user base as a means of iterating through the potential use cases for any Gen AI features.
It’s worth noting that they’re not alone in this. We explored Google’s preliminary defensive moves to protect search from ChatGPT functionality in our last newsletter.
Google bets the house on AI
TL;DR
Google’s chief bean counter, Ruth Porat, has announced redundancies as part of a global restructuring of the company’s finance arm to enable further AI investment. The giant of search is streamlining the business’ enabling functions as a means of positioning themselves as not only an “AI ready” company, but also as the leading player in this space.
Right, What’s Happened?
Google has announced redundancies as part of a restructuring of their internal finance arm, with finance hubs to be formed in Bangalore, Mexico City, Dublin, Chicago and Atlanta. They argue that “This strategy will help us be a more efficient organisation and enables us to run 24 hours a day”.
Great, What Does This Actually Mean?
Google is streamlining the enabling functions of the business so that they can operate a decentralised service 24 hours a day.
So What Happens Next?
Google’s spokesperson stated that “The tech sector is in the midst of a tremendous platform shift with AI”.
I mean, nobody at the Download is going to argue with that statement. In fact, I’d go even further and argue that this is by no means the first and certainly won’t be the last time that we see redundancies across traditionally “back of house” operations to liberate financial bandwidth to deploy towards AI.
Indeed, just as broader industry is taking a good long hard look at its data and shaping itself to be “AI ready” and “AI first”, those who are actually in the AI game are optimising their operations for AI.
Furthermore, other companies are almost certain to follow Google’s lead in this, which unfortunately means more redundancies.
AMD Enters The Chip Wars…
TL;DR
Desktops are cool again. Or at least, Advanced Micro Devices (AMD) are hoping to God that they will be soon, as they’ve announced a new line of semiconductors to satisfy the consumer market’s supposed yearning for tech that can handle complex language models and applications directly on the device. Not only does this mean that you can remove that needle chock full of cloud from your vein, but also that companies are getting pretty desperate to stake their claim to their slice of the AI pie.
Right, What’s Happened?
You’re probably wondering who’s still buying PCs in this day and age, but the market did see an uptick with the shift to remote working as people sought to convert their spare bedrooms into true home offices as opposed to just “some room where I open and close a laptop”. In fact, Intel said in January it expects to "ship approximately 40 million AI PCs in 2024 alone".
AMD have unveiled a new line of semiconductors for AI enabled laptops and PCs, which will be available in HP and Lenovo laptops and home computers.
Great, What Does This Actually Mean?
The potential impact of AI on the personal PC market hasn’t been discussed or investigated as publicly as Gen AI itself, but it makes complete sense. As smaller models become more accessible, folks will want to customise and test them, so a PC powerful enough to run the necessary models is a natural fit.
Or at least, AMD clearly think so, and have certainly chosen to target a section of the AI consumer market that you don’t really hear much about.
This is quite a strategic bet: they clearly feel that the consumer market can recover and has been overlooked enough to make the investment worthwhile.
So What Happens Next?
I know that we say this all the time, but the AI space really is moving that fast, and companies with resources are now deploying them to cement their claim to a share of this lucrative market.
The PC market has been relatively overlooked until now, so expect other manufacturers to align strategically with chipmakers as this year progresses.
My editor keeps telling me that you loved the first article in our series on the building blocks of AI. He’s probably lying, but he means well.
Either way, here’s the second (and I’m told ‘long awaited’) article in our foundational knowledge of AI series, and it isn’t one to miss.
To put it bluntly, if you don’t understand machine learning then you don’t understand AI. It’s like saying you’re a teacher who can’t read; the two sort of go hand in hand…
The Building Blocks Of AI - Issue 2
Rise Of The Machines
Machine learning - why all the fuss?
Private industry has jumped on the potential opportunities enabled by machine learning quicker than a tramp on chips, and it’s easy to see why. It’s enabling them to uncover hidden patterns, gain valuable insights and make data-driven decisions with unprecedented accuracy and efficiency.
“Machine learning” (ML) is everywhere these days, both literally and figuratively. It’s behind the personalised recommendations on your Spotify and your home assistant, but also all over social media and the poster-child of many overnight AI gurus. However, despite the barrage of posts, many still don’t really understand the inner workings of machine learning, even to a basic level. Oh, they can tell you “yes, it’s how the model learns don’t you know?”, but you’d get a poisonous stare if you dared to ask them how or why.
We covered the basics of ML in our previous article on the foundational principles of AI, but we’re going to take it a few steps further in this one and put you in a position where you can provide value the next time your boss brings the topic up.
The boring stuff
As we covered in our previous article, ML is basically the process by which machines learn how to make predictions or decisions without the explicit instructions for every task that accompany traditional programming. Through ML, the algorithm learns from examples and adjusts accordingly.
Broadly speaking, humans need two things to improve their cognition: access to knowledge and the capacity to consume it. The equivalents for machines are data and computing power, both of which have undergone huge advancements in recent years. Our ability to collate, store, sort and exploit big data alongside hardware improvements means that we’ve basically built the real-life equivalent of Sheldon Cooper on cognitive steroids.
This incredible ability to learn from data lies at the heart of ML’s power and versatility, and the data can come in various forms. Models are trained on everything from text and images to video and the data often contains patterns or relationships that the algorithm seeks to uncover. You might be trying to analyse customer behaviour, identify spam emails or predict stock prices (hit us up if you figure out that last one). No matter the purpose, the ability to process and interpret data lies at the core of Machine Learning's capabilities, and it’s emerged as a powerful tool for extracting insights and making predictions from vast and complex datasets.
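To make “learning from examples” concrete, here’s a minimal sketch in Python. It fits a straight line to a handful of past data points and then predicts an unseen value — no explicit rule is ever written; the relationship is inferred from the examples alone. All the numbers are invented purely for illustration:

```python
# Invented example data: pairs of (input, observed outcome).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]       # e.g. advertising spend
ys = [2.1, 3.9, 6.2, 8.1, 9.8]       # e.g. resulting sales

# The "learning" step: ordinary least squares picks the line y = a*x + b
# that minimises the error on the training examples.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Prediction on new, unseen data — the model generalises from its examples.
def predict(x):
    return a * x + b

print(round(predict(6.0), 1))   # roughly 11.9 on this toy data
```

Real ML models are vastly more complex than a straight line, but the loop is the same: choose parameters that fit the examples, then predict on data the model has never seen.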
The core concepts of machine learning
Algorithms are a key concept in understanding ML and we’re going to explore them in more detail than we did in our previous article. However, we’re kind (not to mention wise) souls here at the Download and have outlined the basic concept below for anyone who missed it:
“An algorithm is simply a step-by-step procedure or set of rules designed to solve a specific problem or perform a particular task. In the context of AI and Machine Learning, algorithms are mathematical models that process data and make decisions or predictions based on that data.”
Think of algorithms as the recipes that guide the learning process, outlining the steps the system takes to analyse data, identify patterns, and make decisions.
If the algorithm is the recipe then you still need ingredients, so what are they in the context of AI?
You guessed it. Data.
Or, more specifically, “training data”, which serves as the foundation upon which Machine Learning algorithms are built. Training data consists of examples or instances that the algorithm uses to learn and improve its performance. These examples are typically labelled with the correct answers, allowing the algorithm to learn from its mistakes and adjust its predictions accordingly. The quality and quantity of training data play a crucial role in the performance of the Machine Learning model, as it determines the system's ability to generalise and make accurate predictions on new and unseen data.
How do you know the training worked?
How many times have you heard an athlete say “I don’t know what happened. My training went really well but something just felt off today” after an atrocious performance? You sit there howling at the TV and throw the remote across the room because that idiot on the screen has just lost you money after you promised your partner the bet was a sure thing.
Just me? Oh, well…
The truth is that it doesn’t matter how much you train; the real proof is in the test, and algorithms are no different.
So step aside training data, because you now have a trained algorithm. Enter “test data”, which is what’s used to assess the model. This evaluation process is a means of assessing the algorithm's performance and accuracy based on predefined metrics or criteria.
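The split between training and test data can be sketched in a few lines. The labelled examples here are invented for illustration — think of them as (message, is_spam) pairs a spam filter might learn from:

```python
import random

# Invented labelled examples: (message, is_spam) pairs.
examples = [(f"message {i}", i % 3 == 0) for i in range(100)]

random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(examples)  # shuffle so the split isn't biased by ordering

split = int(len(examples) * 0.8)   # a common 80/20 split
train_data = examples[:split]      # what the algorithm learns from
test_data = examples[split:]       # held back to assess the trained model

print(len(train_data), len(test_data))   # 80 20
```

The crucial point is that the test examples are never shown to the model during training — otherwise you’re assessing memorisation, not learning.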
But what metrics do you test against? To refer back to our athlete analogy, there’s no point in assessing your Olympic weightlifters against their 100 metre times, and you need to understand what you’re looking for when testing your model.
Common evaluation metrics
We’ve outlined some extremely basic evaluation metrics below. These are just a starting point for awareness as we’ll cover evaluating a model in further depth in a future article.
Accuracy and error - this one’s pretty simple and intuitive. “Accuracy” refers to the proportion of correct predictions or classifications, and “Error” to the proportion of incorrect ones.
Precision - The proportion of the results your model returns that are actually “relevant”. The term “relevant” is, of course, subjective.
Recall - If “precision” quantifies quality then “recall” quantifies quantity. Namely, the proportion of all the relevant results available that your model actually identifies.
F1 score - There’s a tension between precision and recall, in that pushing one up tends to push the other down, and they require fine tuning to achieve the right balance. The F1 score, the harmonic mean of the two, combines them into a single number and gives a fairer picture of a model’s performance when you have imbalanced data, as so often happens.
All of the above provide insights into the algorithm's ability to make correct predictions and avoid errors. However, and I really can’t stress this enough, this is a huge topic and one that could fill an entire article series in itself (and perhaps will if you folks are interested). The above will enable a basic understanding though and allow you to follow along in educated conversation.
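To make these metrics concrete, here’s a toy Python sketch that computes all four from scratch for an invented spam-filter example (1 = spam, 0 = not spam):

```python
# Invented labels for illustration: what actually happened vs what the
# model predicted.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives

accuracy  = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
precision = tp / (tp + fp)   # of what we flagged, how much was relevant?
recall    = tp / (tp + fn)   # of what was relevant, how much did we flag?
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, recall, f1)
```

On this toy data the model looks decent on accuracy (0.7) but its recall is only 0.5 — it missed half the spam — which is exactly the kind of gap that accuracy alone hides on imbalanced data.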
Those damn time thieves!
You know that we’re big fans of real-life examples at The Download, and there’s no more relatable example for what we’ve discussed thus far than the dual dopamine drips that are Netflix and YouTube. Those damn time-thieves have stolen more hours off you than your boss ever did, and it’s all thanks to their recommendation systems, which do a perfect job of keeping you hooked. So how do platforms like those employ ML algorithms to analyse user data and provide personalised recommendations tailored to each user’s preferences and interests?
“Recommendation systems” is an umbrella term used to describe numerous ML techniques that all seek to ensure that content platforms are always one step ahead and can pick the perfect next watch, and “collaborative filtering” is one such ML technique. Simply put, it analyses user behaviour and preferences to make personalised recommendations. At its core, collaborative filtering works by identifying similarities between users or items based on their past interactions and preferences. For example, if two users have similar viewing histories or have both liked similar movies or songs in the past, the system may recommend content that one user has enjoyed to the other. I’ve definitely caught the attention of the marketers in the room…
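A user-based flavour of collaborative filtering can be sketched in a few lines of Python. The users, films and ratings below are entirely invented for illustration — real systems work on millions of users and far cleverer similarity measures:

```python
from math import sqrt

# Invented ratings: each user has rated some films out of 5.
ratings = {
    "alice": {"Heat": 5, "Se7en": 4, "Up": 1},
    "bob":   {"Heat": 5, "Se7en": 5, "Up": 1, "Drive": 4},
    "carol": {"Heat": 1, "Se7en": 2, "Up": 5},
}

def similarity(u, v):
    """Cosine similarity over the films both users have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][f] * ratings[v][f] for f in shared)
    norm_u = sqrt(sum(ratings[u][f] ** 2 for f in shared))
    norm_v = sqrt(sum(ratings[v][f] ** 2 for f in shared))
    return dot / (norm_u * norm_v)

# Recommend to alice whatever her most similar user liked but she hasn't seen.
peers = sorted((similarity("alice", other), other)
               for other in ratings if other != "alice")
_, nearest = peers[-1]
unseen = [f for f in ratings[nearest] if f not in ratings["alice"]]
print(nearest, unseen)   # alice's tastes match bob's, so she gets "Drive"
```

The whole trick is in that similarity function: users who rated the same films the same way are treated as taste twins, and each inherits the other’s favourites.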
In addition to collaborative filtering, recommendation systems may also incorporate “content-based filtering”, which analyses the attributes or characteristics of items to make recommendations. For example, if a user has previously enjoyed action movies, the system may recommend other action movies based on similar attributes such as genre, actors, or director.
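Content-based filtering can be sketched similarly. The catalogue and genre tags below are invented for illustration — a real system would compare far richer attributes (cast, director, themes):

```python
# Invented catalogue: each film is described by a set of genre tags.
catalogue = {
    "Heat":  {"action", "crime"},
    "Drive": {"action", "crime", "drama"},
    "Up":    {"animation", "family"},
    "Moana": {"animation", "family", "musical"},
}

liked = ["Heat"]   # the user's viewing history

def score(film):
    """Jaccard overlap between a film's genres and the user's liked genres."""
    liked_genres = set().union(*(catalogue[f] for f in liked))
    genres = catalogue[film]
    return len(genres & liked_genres) / len(genres | liked_genres)

# Rank unseen films by how closely their attributes match past favourites.
recs = sorted((f for f in catalogue if f not in liked), key=score, reverse=True)
print(recs[0])   # "Drive" shares action/crime with "Heat"
```

Notice the contrast with collaborative filtering: here no other users are consulted at all — only the attributes of the items themselves.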
This isn’t an either / or scenario though, and recommendation systems often employ hybrid approaches that combine collaborative and content-based methods to provide more accurate and personalised recommendations. By leveraging multiple data sources and algorithms, these systems can tailor recommendations to each user's individual preferences and interests, enhancing the overall user experience and engagement.
What else can ML do?
Being able to describe to your boss how Netflix knew that they’d click on a particular show is all well and good, but it’s a little narrow (and dare we say ‘casual’?) as an example. Fortunately, ML has a broad range of applications across industries…
Image Recognition - ML algorithms can analyse and interpret visual data to recognise objects, faces, and patterns in images, and one of the key use-cases is autonomous vehicles.
Natural Language Processing (NLP) - ChatGPT anyone? That’s right, NLP (and transformer architecture, but that’s a story for another day) is behind the terrifying capabilities first demonstrated in November 2022. NLP enables machines to understand, interpret, and generate human language in a way that is meaningful and contextually relevant.
Predictive Analytics - Possibly the most applicable use-case for industry as it’s often used in areas such as financial forecasting, customer churn prediction and inventory management. ML algorithms analyse historical data to identify patterns and trends and make more accurate predictions about future events or outcomes.
The (conversational) power of ML
It’s never fun when you’re caught out by a colleague or client on a topic you haven’t done your homework on. Fortunately, that’s no longer a concern for you where ML is concerned.
This article is never going to land you an ML engineer job at Palantir, but that’s not what it’s designed for. You now actually understand and can contribute to discussions on the practical applications of ML in industry, which is emerging as a key skill in the current employment market. Being conversant in the foundational technologies of AI will differentiate you from your peers and is the quickest way to boost your credibility in industry.
You can now hold your own the next time this thorny topic is raised in a meeting and even provide some relevant industry (and personal) examples. All of which we’ll build on in our next article, which will explore neural networks in depth and further complement your burgeoning AI toolkit.
3 Hand-picked AI tools every week that allow you to get ahead in your job and beat the competition. These tools will not only save you loads of time but also improve the quality of your work and help you get noticed.
When was the last time you heard anyone say “wow, that documentation was really interesting!”?
You haven’t and you never will! Use Guidde to turn that boring documentation into visual guides and document workflows for new hires and to share with team members.
Imagine being able to edit your video content through conversation! Well now you can! Simply upload the file to Descript and begin editing the video through text based interaction.
I have a journalist friend who swears by this tool. The most time-consuming part of her job is transcribing all the interviews that she does, and this tool literally saves her hours every week.