Lecture Slides
About the speaker

Meet the Speaker
Lazarina Stoy.
Lazarina Stoy is a Marketing Consultant, Trainer, and Speaker, specializing in SEO, machine learning, and data science. Lazarina is also the Founder of MLforSEO, a machine learning training platform for organic search marketers, and Founder of the Women in Marketing – Bulgaria community.
Lecture Transcript
Sponsor Introduction (Scholarship Sponsors)
You might have seen that we frequently advertise scholarship tickets for our events, and that included our first networking dinner as well as this event. I am completely blown away by the fact that we have three companies that are, in my eyes, amazing, international, and super well recognized, and that recognized our event as valuable enough to purchase tickets for, so we could allocate those tickets and give women in marketing in Bulgaria the opportunity to attend for free. I will tell you a little bit about those scholarship sponsors, and then I'll get started with my 140 slides on Google Cloud.
So first off, Screaming Frog, our first ever sponsor. If you don't have the Screaming Frog crawler downloaded on your device, or if you don't have an account, the least you can do is go on LinkedIn and tell Screaming Frog that they're the best team ever, because they trust us so much with every event that we do, and they're just a joy to work with. It's basically a website crawler that helps you improve on-site SEO by auditing common SEO issues, but it has a bunch of other capabilities, like content auditing, web accessibility auditing, and content scraping as well. So, it's basically an all-in-one tool, kind of like your Swiss Army knife whenever you're working in marketing. Definitely check it out.
We also have Majestic, an SEO backlink checker and link-building tool set. They're also very renowned, well known for the book series they're doing, SEO in 2024 and SEO in 2025. And the reason I'm mentioning it is that I learned, through them sponsoring us, that they actually allocate the funds they receive from sales of the book, ad revenue, and all of that, to sponsoring communities like ours. They essentially give back everything that they receive from the sales of this book. And they're also frequently searching for expert contributors for the book, so definitely apply if you are interested in providing your take on what SEO is going to look like in the next year.
And finally, we have Google Search Central, who also purchased tickets for our conference. If you know of John Mueller, he is the person who actually did that for us, so we're very, very grateful. And most of you are probably already using Search Console, so yeah.
Speaker Introduction
With that said, I want to very quickly say a word about me. Besides my role in this community, I'm actually a massive machine learning advocate, and I'm not going to use the word AI in this presentation at all. There is a slide that explains why I don't call it AI and call it machine learning instead.
I've actually been consulting in SEO for several years now, working with enterprise companies. I decided to start implementing machine learning wherever I can for automations, and I learned a lot more on this topic during my master's. And now I also have a training academy for learning machine learning for SEO and for marketing in general.
Lecture by Lazarina Stoy.
I will talk to you about how to supercharge marketing with Google Cloud, and specifically how to use Gemini as an alternative to ChatGPT for some tasks. I'm not going to say whether Gemini is better than ChatGPT or not, because, as you have seen, we are sponsored by two of Google's teams. Of course Gemini is better, yes, of course. But I'll leave that for you to decide.
My aim with this lesson is very, very simple. I know that I'm not going to teach you everything you need to know about artificial intelligence, machine learning, or even everything you need to know about automation in general. What I want to do, first, is to show you how to get started, because that's what you need in order to get some dopamine in your head and say: okay, actually, this automation stuff is pretty easy, especially when it's done right, and it gets amazing results that I can actually share with my clients. The second thing is to understand just a little bit of the theory of how some of the models work, and what the difference is between AI and machine learning, for instance. And the third thing is to give you ideas. This is not a complete, exhaustive list of everything you can do with this technology. As you all know, if you just open LinkedIn, you'll see there are like 15,000 people giving you advice on how to incorporate AI into your workflow. So, I'm going to be just yet another person, but hopefully it will get your creative juices flowing to see how you can implement these specific APIs to solve some of the problems that you have.
So, starting from the very beginning: what is the difference between AI and machine learning? Artificial intelligence is the design and study of systems that, very importantly, appear to demonstrate intelligent behaviour.
So, the key point here is that we're building complete systems, and they're meant to mimic what we do as humans every day. It could be self-driving cars, it could be recommender systems: essentially, combining multiple models into one. Machine learning is a subset of AI, not the whole thing, of course. It's just an approach to building AI applications where the models are trained to make predictions, as simple as that. They are trying to predict what the correct outcome is for a particular task that they have been trained to do. So, I'm not going to be teaching you how to build the next Netflix here, or how to build the next Tesla.
We're only going to look at models that have been trained to make predictions, as simple as that. You might commonly hear things like AI and machine learning being used interchangeably, or being used interchangeably with deep learning or data science, which are completely separate fields, or overlapping ones, or subsets. And generative AI is here too.
Even though on LinkedIn it might seem like generative AI is the whole thing, it's just this small dot over there, and ChatGPT is that super tiny little thing that's just blown out of proportion in many cases, or at least on my LinkedIn, I don't know about yours. So, starting with the basics: what you actually need to know in order to implement machine learning in your day to day comes down to three aspects.
You need to consider the characteristics of the task, the characteristics of the data, and the characteristics of the solution that you want to implement in order to solve a problem.
So, starting with the task. You can think of machine learning as two things: it's either supervised, with tasks like regression and classification, or it's unsupervised, with tasks like clustering and dimensionality reduction.
What this means, and I'm going to simplify it a little more here, is that in one case you have labelled data, so you can actually validate the results of the model that is making a prediction. In the other case, you don't have a way to validate the results: you only have the data, and you're asking the model, what does this actually say? Here's a tiny sketch of that distinction in code.
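A minimal illustrative sketch, with made-up data and scikit-learn as an assumed stand-in (the talk itself stays no-code): a supervised classifier we can validate against known labels, versus clustering, where interpreting the groups is on us.

```python
# Illustrative only: supervised vs. unsupervised on a tiny made-up dataset.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[1.0, 0.2], [0.9, 0.1], [0.1, 0.9], [0.2, 1.0]]  # feature vectors
y = ["news", "news", "sports", "sports"]               # known labels

# Supervised: we have labels, so we can train and then validate predictions.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.95, 0.15]]))  # a prediction we can check against reality

# Unsupervised: no labels; the model just groups the data,
# and it is on us to interpret what each cluster means.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```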
That's a very, very simplified view. In practice, the field of machine learning actually looks something like this. And in reality, it could look even more complex than this. You have different models that you can use for different tasks, different ways to train the model, and all of that stuff. So, definitely not an easy, beginner-friendly field. But the tools and the technologies that you need are readily available to us.
And here's the most important thing: you don't need to understand everything to get started. That's the biggest takeaway, if you take only one from this session. You don't need to become an expert in programming, in maths, or in machine learning as a whole field. You can just pick one thing and start with that.
Whenever we're deciding what machine learning model to use, we also have the choice to either self-train a model, use a pre-trained model, or fine-tune a model that has been trained by another company.
So, the difference here is that with our own training, we need a ton of data, and we need a much greater understanding of how machine learning works and of the maths behind this technology. With a pre-trained model, the machine learning model has already been trained.
So, what we can do is read the documentation, read the papers, and apply it to a suitable data set that is recommended by the people who developed the model, right? So, you already know which of the two is for a beginner and which is for an expert. But if you're somewhere in between using a pre-trained model and self-training, fine-tuning is a great option, because you can take a model that has been trained on a ton of data by a big company, created to their quality standards, and fine-tune and adapt it to a small data set that is specialized to the data and the task you're working with. That's essentially what fine-tuning does.
And because I'm here to preach to you about Google's products and Google Cloud's APIs: they have a service called AutoML that actually allows you to fine-tune a lot of the models that exist out of the box.
So, for instance, if you're working in a very customized domain and you're trying to make a model better, that is one way to do it. You don't need to have a ton of data; you only retrain it with a few, or a few hundred, examples, and it essentially improves the model quality significantly. For the curious, a rough sketch of what that looks like in code follows.
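This is only an illustrative sketch with the Vertex AI Python SDK; the project ID, bucket path, and display names are placeholders, and since Google has been folding the standalone AutoML products into Vertex AI, check the current documentation before relying on this exact flow.

```python
# Rough sketch, not a production recipe: fine-tuning a managed text
# classifier on a few hundred labelled examples with the Vertex AI SDK.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")  # placeholders

# A dataset of labelled examples, e.g. a JSONL file exported to Cloud Storage.
dataset = aiplatform.TextDataset.create(
    display_name="custom-content-labels",
    gcs_source="gs://your-bucket/labelled-content.jsonl",  # placeholder path
    import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,
)

job = aiplatform.AutoMLTextTrainingJob(
    display_name="custom-content-classifier",
    prediction_type="classification",
)
model = job.run(dataset=dataset)  # Google handles the actual training run
```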
Another question you need to ask yourself is: do I have data for training? Do I have data for validation? Do I have data for testing? This is very important if you want to develop a proper machine learning model. And if you don't have that, then you already know you're going to be using either a manual approach, or you're going to be implementing or fine-tuning a machine learning model that has been created by someone else. A quick sketch of the standard train/validation/test split follows.
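For reference, here's what that split typically looks like with scikit-learn, using placeholder data:

```python
from sklearn.model_selection import train_test_split

# Placeholder data: your scraped pages and their known categories.
texts = ["page one ...", "page two ...", "page three ...", "page four ...", "page five ..."]
labels = ["news", "sports", "news", "sports", "news"]

# Carve out a test set first, then split the remainder into train/validation,
# ending up with roughly 60% train, 20% validation, 20% test.
X_temp, X_test, y_temp, y_test = train_test_split(texts, labels, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=42)
```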
When it comes to the data, think about whether it's textual data you're analysing, which is essentially natural language processing, like analysing the content of your page; or numeric, like predicting where traffic is going to go and how many conversions you're going to have; or image-based.
You can also have things like time series data and so on. So, try to pinpoint exactly what data you're working with in order to solve the particular challenge that you have. And then, when it comes to the solutions, there is a lovely flowchart on when to actually implement AI, machine learning, and all of that.
So, if it's a mission-critical task and your job depends on it, like making a very accurate forecast for 2025 that you're being benchmarked on, don't go with a fully automated system and, you know, brag to your boss that you have done it in 20 minutes, because... right? This is a mission-critical task, and maybe your entire team's performance depends on it.
If it is something that should remain consistent over time, then you should definitely avoid working with generative AI, and avoid over-relying on unsupervised machine learning approaches. If you need the results of the output to be very easy to understand for all of the stakeholders who will be approving the work you have completed, then you're not going to be using a deep learning model.
So, I'm going to open a bracket here. I don't know how many of you saw this, and now I'm going to test how many of you check Slack regularly, because I posted about it a few days ago in the channel. But don't raise your hands; don't tell on yourselves. Essentially, Google held a creator event because they completely shadow-banned a few independent publishers.
And at this creator event, one of the website owners asked whether they could troubleshoot, line by line, how Google's algorithm works and why it is banning their websites completely, or why their websites are not appearing as they used to, because Google themselves had told them that their content is quality, and their content is not the issue.
And of course, when I was hearing her recollection of the event: that's just not how it works. Because deep learning is, you know, a black box. You give it a ton of data, you train the model to make predictions, and in most cases the model learns based on the data, but it doesn't have input from a human saying: do this first, do this next, do this last.
You know, it’s not an IF-ELSE system. So, of course, they can’t specifically pinpoint what is the factor. They know that there’s a bunch of ranking factors that go into it. They don’t know which specific one is the point of failure. Essentially, that’s how you can think of this specific issue, and that’s where Google fails in this flowchart, right?
And I'm sorry to say this, but they know their systems are failing, because the results of the system are not easy to explain to the stakeholders, publishers in this case. In which case, deep learning is not really the best way to be ranking websites, right?
Because you can't explain why one is ranking and another one is not. Why is Forbes ranking, and your friendly neighbourhood blog, which has been publishing original content for 20 years, is not, right?
And then, if it's just a case of "on average, this particular approach outperforms existing methods", then yes, implement machine learning.
What I mean by this: I love this example, because I was actually this intern a few years ago. The intern you task with writing meta descriptions, or the intern you task with writing image captions. Don't do that in almost-2025, right? There are better ways! It's not like everything is going to go wrong if it's not the perfect meta description or the perfect image caption; it's not a critical task. So absolutely, you can implement automation. You can still have a human in the loop to check and validate, but this is something that can be automated.
So, when we talk about what can and can't be automated, and how to choose whether to implement automation or not, you can assess usefulness based on factors like the insights you'll get, the complexity of implementing the solution, how accurate the results are going to be, and how scalable it is. Do we have enough data to work with for this task? Is it sustainable to implement at an agency level, in terms of processes, in terms of systems, and so on? So, look at a bunch of factors, including whether you're saving money, draw the bottom line, and say: yes, this makes sense for us, or it doesn't. But don't just do it because you've seen it on LinkedIn or something.
And always beware that most machine learning models have biases as well. Bias is when the machine learning model actually favours some things and deprioritizes or excludes others. This could be things like computer programming jobs only being shown to men, or a facial recognition system not being accurate in recognizing people with certain skin tones. Even in generative AI there are a bunch of examples, especially on the image side, of certain jobs being linked to certain genders. Not going to name any names, but again, there are some companies that didn't get this one right. So essentially, whenever you're training a model from scratch, you can reduce this by using, you know, larger data sets.
Some companies are also incorporating synthetic data. Synthetic data allows us to include data in the training where it doesn't exist originally, so that we can actually change the course of how we want these models to work and the output we want to have, even if historically we had some inherent biases in the way our society worked.
And when you're using a pre-trained model, you don't have control over how that model has been trained. So essentially, if you're working with ChatGPT, for instance, you can use extreme examples in your prompt; as one study has shown, that's one way to actually combat this. Whenever you're writing your prompt, you can say: I specifically do not want to see examples of this and this featured in my output.
Or, very importantly, you can simply stay aware of the biases whenever you're analysing the results you get from the machine learning model. One way to build that awareness is to read the documentation of the model itself, because most researchers actually list all of the biases they have identified during training, and that can help you become better at using the model.
Okay. Now we're getting to the meat, I promise. I had to include this section on theoretical machine learning, and now we're up to the practical stuff. As with all of the presentations today, we have a split for SEO, for social media, and for different parts of marketing. So, I'll start with SEO. You can do text classification. Classification, from the example we had a few minutes ago, is a supervised machine learning problem, and it tries to sort data that you already have into pre-existing labels. So, for instance, you have content on your website and you're trying to label it: whether it's news, what topic it is on, and so on, but you already have those labels, those categories. It's essentially sorting the content into buckets: a supervised approach that sorts the data. Applications could include doing your own content audit or a competitor audit, but also a bunch of other things, because, as you can see, with this API by Google Cloud you can classify documents into more than 1,300 predefined categories, and the process shouldn't take you more than 20 minutes to execute. You create an API key, and you identify the content that you want to scrape.
You download Screaming Frog, one of our scholarship sponsors, and scrape the content with that, because it's a no-code alternative and quite easy to execute. You enter your content and your URLs so that you have some sort of identifier in the template. All of the templates and code will be linked afterwards, but this is a completely no-code approach, right?
So, you run the script via a formula, and you get a classification label and a confidence score. If you look at the classification label you get from the script, which you can of course plug and play into Looker Studio, you'll see that it covers things like whether the page is news, whether it talks about arts and entertainment, or whether it's related to sports.
But then you also have secondary and tertiary categories. So, if you click on one, you might see that arts and entertainment actually breaks down into events, visual arts, performance arts, or contemporary art, and then further down into whether it mentions museums or galleries or whatever. For developers, a minimal sketch of the underlying API call follows.
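This is just the developer equivalent of the no-code template, a minimal sketch with the official Python client and a placeholder page body:

```python
# Minimal sketch: classifying scraped page content with the
# Cloud Natural Language API (pip install google-cloud-language).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

page_text = "The championship final drew a record crowd to the stadium on Saturday..."
document = language_v1.Document(
    content=page_text, type_=language_v1.Document.Type.PLAIN_TEXT
)

# Note: classifyText needs a reasonable amount of text to work with.
response = client.classify_text(request={"document": document})
for category in response.categories:
    # e.g. "/News/Sports News" plus a 0-1 confidence score
    print(category.name, round(category.confidence, 2))
```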
So, this is an extremely, extremely granular way to look at your content. Where can you actually implement this in your SEO strategy? If you're working with a website that hasn't really updated its content categorization system, or if you have a ton of content writers and no coordinated strategy for tagging the content itself, this can be a very useful way to understand the vast library you're working with, and, you know, a first step in organizing that content. It can also be a great input to plug into Looker Studio, to see whether certain categories you cover on your website are related to certain authors, or whether the topics themselves are just not performing well.
Either you're not getting clicks, or you're not getting good user engagement, and so on. One other thing you can pair this approach with very nicely is entity analysis. So, you already have the topic, let's say hiking, for instance; or let's say you have news, and you want to know what kind of news it is.
So, the text classification will give you an indication that it's news related to politics, let's say, but with entity analysis you can actually identify things like the specific location the political event is happening in, or the people mentioned in the news piece, and so on. So, it's a very granular way of looking at your content without actually reading it and working out what it's all about. And again, the process is not going to take you more than 20 minutes. You run the script on content you have already scraped, and the entity data you get back is: what the entity is, what type of entity it is, and the prominence of the entity, meaning its importance within the document you have analysed. And you also get entity sentiment.
So, think about it. Maybe a certain politician, not naming names because I don't want to go into it, is always mentioned in a positive light whenever a certain political event is being talked about, or maybe they're always mentioned with a very strong negative emotion. That's the kind of data you might get: what is the emotion, positive or negative, and how strong is it?
And you also get metadata. The metadata could be things like: does it have a Wikipedia page? Does it have a Knowledge Graph entry? So basically, how important is this entity in the grand scheme of things, right?
And you also get the individual mentions. So, if you're talking about Barack Obama, for instance, is he mentioned as just Obama, or Barack, or the 44th president of the US, and so on? So basically, every single way that the author, or, if you're doing user data analysis, the user, has actually referred to this entity.
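Again, the no-code template wraps essentially this call; here's a minimal sketch with a placeholder article body:

```python
# Minimal sketch: entity analysis with per-entity sentiment via the
# Cloud Natural Language API.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

article_text = (
    "Barack Obama spoke in Paris on Tuesday. Obama praised the agreement, "
    "calling it a turning point."
)
document = language_v1.Document(
    content=article_text, type_=language_v1.Document.Type.PLAIN_TEXT
)

response = client.analyze_entity_sentiment(request={"document": document})
for entity in response.entities:
    print(
        entity.name,                                 # e.g. "Barack Obama"
        language_v1.Entity.Type(entity.type_).name,  # e.g. PERSON, LOCATION
        round(entity.salience, 3),                   # prominence in the document
        entity.sentiment.score,                      # -1 negative to +1 positive
        entity.sentiment.magnitude,                  # strength of the emotion
        entity.metadata.get("wikipedia_url", "-"),   # present for known entities
    )
    for mention in entity.mentions:
        print("  mention:", mention.text.content)    # every surface form used
```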
So, it's a very, very advanced model, there are a ton of data points you can apply it to, and a ton of different data you're going to get back; for user analysis, this is a goldmine. And does it matter how you do it? Like, why not just use ChatGPT? Well, because ChatGPT makes shit up; it's not accurate, and it's just not a robust approach, especially when you're working at scale.
It doesn't really give you all the data points that the Cloud Natural Language API gives you, and it just invents entity types that don't really exist. And you know what: for models that Google uses in their own systems and has released as APIs, if there is such an alternative, use that alternative, right? Because, especially in SEO, we are all essentially at the mercy of the ranking algorithms they create. There is a comparison that goes in depth into the reasons why one is better than the other, but I'll leave that for you to check out after the talk.
And for social media, for local SEO, and for video as well, you can do things with autocomplete APIs. They allow you to use YouTube autocomplete, which is a very easy way to scrape ideas directly from what Google considers the next step in the user journey as they're typing in the YouTube search bar.
You can also do the same for Google Maps, both for the keywords people enter in Google Maps and for the data Google Maps serves. And you can actually play with this to enter different locations where the user might be, which is great for local SEO. And, as you can see, you can also do the same for physical landmarks and locations. A rough sketch of both follows.
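Note that the YouTube suggest endpoint below is the widely used but unofficial one, so treat it as fragile; the Places Autocomplete call is Google's documented API, and the key is a placeholder:

```python
import requests

# YouTube search suggestions (unofficial suggest endpoint; may change without notice).
resp = requests.get(
    "https://suggestqueries.google.com/complete/search",
    params={"client": "firefox", "ds": "yt", "q": "eiffel tower"},
)
print(resp.json()[1])  # a list of suggested queries

# Google Maps / Places autocomplete (official API; needs a real key).
resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/autocomplete/json",
    params={
        "input": "eiffel",
        "location": "48.8584,2.2945",  # bias results towards a lat/lng
        "radius": 5000,
        "key": "YOUR_API_KEY",         # placeholder
    },
)
print([p["description"] for p in resp.json().get("predictions", [])])
```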
So essentially seeing: okay, if you are recommending something related to the Eiffel Tower and the user types that into the Google Maps search bar, what else do they get based on this query? You can also do things like content moderation. Again, same API; I'm not going to go through all five implementations, but it's just the same API.
And that model automatically analyses whether the content is inappropriate or whether it represents, you know, clean, professional data. This is actually something Google identifies as important as a topic: Your Money or Your Life, which is very important for SEO. If you look at their own Search Quality Rater Guidelines, they say that Your Money or Your Life topics can concern things like happiness, health, financial stability, or safety.
And then, if you look at the API they have publicly released, which of course they use in their own systems, it's broken down into exactly these kinds of topics, but it's so granular that you can see whether certain content is talking about politics, finance, or legal matters, whether it's related to war and conflict or religion, and whether it's toxic or an insult or whatever.
So here again, there is a no-code template, which you can get on MLforSEO: you plug in your content, and the API essentially gets you the result. If you see things in red, you automatically know, and you can then link this data with Google Analytics traffic data and actually see: okay, maybe this article is not performing well, not because we didn't target it well, not because we didn't do proper research, not because of anything else, but just because the author inserted a political opinion in there, and that's pissing people off, for instance. That's a very quick and easy way to ensure that your branded content actually remains brand safe. And I have a few examples: I scraped the SEO subreddit for this, and as you can see, two posts from there have, of course, been completely flagged as toxic as hell. But are we surprised, right? For reference, a minimal sketch of the call behind the template follows.
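A minimal sketch of that moderation call; the article body is a placeholder, and the 0.5 threshold is an arbitrary choice:

```python
# Minimal sketch: content moderation via the Cloud Natural Language API.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The article body you scraped goes here.",  # placeholder
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.moderate_text(request={"document": document})
for category in response.moderation_categories:
    # Categories cover politics, finance, legal, war & conflict, toxic, insult...
    if category.confidence > 0.5:  # arbitrary flagging threshold
        print("FLAGGED:", category.name, round(category.confidence, 2))
```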
And again, there are a ton of possible data points and places you can use this, including user research. And finally, in this section, we have sentiment analysis. You can use an instant data scraper to scrape Google reviews from Google My Business profiles, and you can do the same for Amazon, or for wherever your product reviews live. Then again: no-code template, 20 minutes, you can absolutely do this. You get the review analysis, with things like the overall sentiment score and the sentiment magnitude, which is the strength of the emotion, and you can customize the tags to be as specific as you want. The call behind it looks roughly like the sketch below.
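A minimal sketch, assuming the Cloud Natural Language analyzeSentiment method behind the template, with a placeholder review:

```python
# Minimal sketch: review sentiment via the Cloud Natural Language API.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
review = "Delivery was fast, but the packaging arrived badly damaged."  # placeholder
document = language_v1.Document(
    content=review, type_=language_v1.Document.Type.PLAIN_TEXT
)

response = client.analyze_sentiment(request={"document": document})
overall = response.document_sentiment
print("score:", overall.score)          # -1 (negative) to +1 (positive)
print("magnitude:", overall.magnitude)  # overall strength of the emotion
for sentence in response.sentences:
    print(sentence.text.content, "->", sentence.sentiment.score)
```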
And you can visualize this data to say: okay, users are actually positive, or they're negative, and this is an analysis you can do very quickly if you're trying to improve a brand image, for instance. But you can also do the same when analysing the sentiment of third-party publisher articles written about your brand, so you can monitor, you know, backlink mentions and see which third-party publishers speak about your brand in a positive way and which in a negative way, so that you can build better brand partnerships, or reach out to publishers that are, for some reason or another, unhappy with your product, brand policy, service, or whatever else. So, it's a great way to work with your publisher partners as well.
When it comes to content transformation, that is the future. I don't need to tell you that you don't have to stop at one blog post; turn it into multiple different assets. If you're starting from a blog post, turn it into a video, into audio, into a social post. And if you already have a video, definitely repurpose it into a text format, audio, short posts, and so on. You can use a bunch of APIs.
I'm going to quickly run through them. You have the Speech-to-Text API, which is available from Google Cloud; they're actually competing against Amazon Transcribe and a bunch of other giants on these types of APIs. But if you choose Google Cloud or Amazon, you're essentially going to get better capabilities than some of the other tools out there, like OpenAI, for instance, and that has been heavily researched. A minimal sketch of the call follows.
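A minimal sketch of transcribing a recording with the official client; the bucket path is a placeholder, and longer files need the long-running variant shown here:

```python
# Minimal sketch: transcribing a webinar recording with Google Cloud
# Speech-to-Text (pip install google-cloud-speech).
from google.cloud import speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://your-bucket/webinar.flac")  # placeholder
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    language_code="en-US",
    enable_automatic_punctuation=True,
)

# Audio over about a minute must go through the long-running method.
operation = client.long_running_recognize(config=config, audio=audio)
response = operation.result(timeout=600)

draft = " ".join(result.alternatives[0].transcript for result in response.results)
print(draft)  # the raw first draft your content team then edits
```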
Of course, you can go with a no-code tool as well, but just beware: you will be charged more for the same task, because most of these no-code tools are actually wrappers around the models of bigger services. So just be aware of that. And my caveat here: I'm not saying spam your blog with auto-transcribed content from YouTube, right? I'm not saying, you know, traffic to the moon, fire your content team, or whatever. I am saying that there are different teams that sometimes work in silos, and you can bridge those gaps by actually having these conversations, and maybe by creating a first draft based on a video that the video team has been producing in tandem with your content strategy.
And this is a very easy conversation to have. Hey, we can actually get X amount of traffic if we just spend two hours editing this article; we can publish it, and we can link the video, blah, blah, blah. So, happy collaboration all around, right? That's what we want.
And we can actually make our content work harder, especially in this day and age, and especially if we’re producing webinars, conferences, all of that stuff, right?
And it's very important to mix and match with other approaches. Here I had a sample process to show you how you might identify relevant videos, then go into entity analysis, into text classification, and so on, combining some of the approaches we've talked about today to create a better overall project. And you can do the same, of course, going from text and turning it into speech; a rough sketch follows.
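Going the other way, a minimal sketch with the Text-to-Speech client looks like this; the voice name is an assumption, so pick one from the current voice list:

```python
# Minimal sketch: narrating a script with Google Cloud Text-to-Speech
# (pip install google-cloud-texttospeech).
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(
    text="Welcome to this tutorial. In this video, we cover three steps."
)
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Neural2-C",  # assumption: check the currently available voices
)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)
with open("narration.mp3", "wb") as out:
    out.write(response.audio_content)
```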
Of course, you have the model and it's quite easy to use, and there are no-code approaches too; again, the bigger Amazon and Google Cloud models are much better. But importantly, this doesn't mean I'm saying spam YouTube with AI-generated trash, right? I don't need to say that. Nor does it mean that you can replace video production.
What I am saying is that certain content formats don't require as much video and audio, and that video and audio can be made from stills as well.
Case in point: this tutorial content, right? You can make that into video very easily without actually obstructing the view, and you can work with stills to make it interactive. And with that, you don't actually need a human voice, because it will be clearer if it's read entirely from a script.
You still have to have a personalized look and feel; you can't fully replace video production in this day and age. But there are some tools, like Synthesia, for example, that let you at least bridge the gap on the human aspect if you are going into automated video production: a human-designed, still synthetic, avatar, but something based on actual people, as opposed to just the fully automated stuff.
And you can also work with text-to-text transformation, no-code or programmatic. You can turn blog posts into social media posts almost instantly. You can use large language models to rewrite your newsletters, and you can do the same for extracting key insights from PDFs, if you have some case studies lying around. And of course, all of this requires some human editing at the end. A rough sketch of the programmatic route follows.
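One illustrative route, using the google-generativeai SDK; the API key is a placeholder and the model name is an assumption, so swap in whatever model your stack uses:

```python
# Rough sketch: blog post -> social posts with the google-generativeai SDK
# (pip install google-generativeai).
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumption: model name

blog_post = "...paste or load your blog post body here..."  # placeholder
prompt = (
    "Rewrite the following blog post as three LinkedIn posts, each under "
    "150 words, keeping every statistic exactly as written:\n\n" + blog_post
)
response = model.generate_content(prompt)
print(response.text)  # always give this a human edit before publishing
```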
You can use generative AI with structured data. So, if you have, for instance, a product database and you're trying to optimize product descriptions, that's a great way to combine this data with large language models and create something scalable. You can do this for individual text summaries, or for user reviews.
For instance: all of our users have mentioned about this product that, when it comes to the look and feel, it's a positive experience, or whatever else. You can absolutely do this by combining a large language model with your user reviews database. And of course, the code is linked at the end; any large language model will do a great job at this task. A sketch of the structured-data pattern follows.
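A rough sketch of that structured-data pattern, with hypothetical product rows and the same placeholder setup as above:

```python
# Rough sketch: generating product descriptions from structured rows.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumption: model name

products = [  # hypothetical rows from a product database
    {"name": "Trail Runner X", "material": "mesh", "weight_g": 240},
    {"name": "City Walker", "material": "leather", "weight_g": 310},
]

for product in products:
    prompt = (
        "Write a two-sentence product description using ONLY these facts, "
        f"with no invented claims: {product}"
    )
    print(model.generate_content(prompt).text)
```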
So, with that, I'm going to quickly run through things you can do with Gemini. Of course, you can do all of them with ChatGPT as well, but Gemini does work better in certain respects. It has existing integrations with Google's web apps. Caveat, in my personal opinion, sorry Google: you're giving access to all of your work data and all of your work files if you enable that.
So, bear in mind that they're using this data to further train their models, right? And we all know how great their safety record is when it comes to AI and machine learning, so bear that in mind. On the plus side, they at least provide different drafts, which makes it a little more interactive and intuitive than ChatGPT, especially when you're trying to fine-tune some things, and they have a multimodal chatbot, which again is the same as ChatGPT.
Where it excels in comparison to ChatGPT is in things like speed, precision, and the knowledge vault. After all, how much data does Google have to train a model on, and how much does OpenAI have, right? Obviously, the answer is quite self-explanatory: Google wins. And for image analysis as well, Google does a better job.
Where it fails, like I said: they don't protect your privacy, they don't care at all. You're going to be uploading documents, you're going to be syncing however many gigabytes of data you have, and they're going to feed that into training their future models. But actually, every company is probably doing that at this point, either through a direct pipeline, like Google, or through selling your data, like, for instance, WordPress selling your data to another company to train their models. So, unfortunately, that's the world we live in.
So, you can create titles from reading a document uploaded to your Google Drive, which is great for blog posts. You can create captions for images or graphs, or data analysis and commentary. For instance, I asked it to create a caption for our conference banner and it did a great job, but you can do the same for analysing graphs and doing very quick data analysis if you don't have a team doing that. You can extract insights and summarize PDFs, roughly the same use case we demoed with ChatGPT; it does a great job with that as well, but of course it ingests that PDF, so definitely don't upload sensitive documents. And you can create ad copy for landing pages, for instance if you just download the landing page as a PDF, and you can do the same for your competitors' landing pages in order to compare how different competitors have built theirs. It will create a brief for you to build the most comprehensive competing page, which you can hand as detailed recommendations to your developers or whoever else will be building that for your company. You can do social posts, or you can ask it for Shorts ideas, because it has a native integration with YouTube as well.
So, if you connect Gemini with the YouTube extension, it can automatically analyse videos and give you ideas: scripts you can write for Shorts or videos you can make, or insight into how your competitors' videos are performing, which is great. There are a lot more tutorials that I have linked, both on how to get started with ideas and on how to get started with Gemini if you're working as a programmer.
So, where to go from here? All of the tutorials and all of the templates will be linked on MLforSEO, and in the slides whenever there isn't an equivalent there. And there is an academy as well. Right now we only have courses, but the academy will launch in 2025, hopefully before I give birth.
So, good luck to me. But essentially, the academy will have weekly videos to help you implement machine learning in your marketing work. If you need to find me, forget the slide: you can message me on Slack. Okay, please use the Slack, people. And I just want to thank you all for listening.