
By Alessio Francesco Fedeli

The current digital landscape is marked by the struggle to make your business visible online and to reach the right audience amid a wave of competition. Search engine marketing (SEM) offers pivotal strategies for achieving this, and with ongoing advancements in artificial intelligence (AI) and machine learning, marketers have more opportunities than ever for growth. These advancements are revolutionising SEM and will significantly enhance the efficiency and effectiveness of business campaigns.

AI-enhanced SEM tools stand at the vanguard of this revolution, utilizing advanced algorithms and machine learning capabilities to transform every facet of search engine marketing comprehensively. From automating the process of keyword research to refining advertisement creation, and from optimising bid management to improving performance analysis, these tools furnish marketers with the capacity to attain exceptional outcomes. They transcend conventional tool functionality; they act as catalysts for change, facilitating precise targeting and real-time modifications previously considered unattainable.

Exploring further into AI and machine learning within SEM reveals that these technologies are not only augmenting existing methodologies but also fostering novel strategies. Marketers harnessing these tools gain the ability to predict market trends accurately, comprehend consumer behaviour with enhanced precision, and implement campaigns that are both cost-efficient and high-impact. The advent of AI-driven SEM marks a transformative era in digital advertising, reshaping the landscape in ways that are beginning to unfold.

Leveraging AI and machine learning in SEM


The Role of AI in search engine marketing

AI revolutionises SEM by making complex tasks simple. It sifts through vast datasets to unearth insights beyond human capability. By fine-tuning keyword research and bid optimisation, AI ensures ads hit the mark every time. It doesn’t stop there; AI tailors ad content for individual users, predicting trends and making swift, informed decisions. This not only sharpens the marketer’s toolbox but also enhances the consumer’s journey, significantly boosting conversion rates. With AI in SEM, ads become more than just noise; they’re strategic moves in the digital marketplace.

Benefits of Using Machine Learning in SEM

Although some marketers are apprehensive, it is important to understand the benefits of incorporating machine learning into your SEM strategy.

Benefits of machine learning in SEM include:

  • Enhanced targeting accuracy: By analysing user data, machine learning identifies the most relevant audience segments, improving the precision of targeting efforts.
  • Optimised bid adjustments: Machine learning algorithms navigate the volatile bidding landscape, making real-time adjustments to maximize ROI.
  • Improved ad performance: It analyses what works best for ad performance, from copy to design, ensuring optimal engagement and conversion rates.
  • Fraud detection and protection: Machine learning acts as a guardian against click fraud, safeguarding advertising budgets from dishonest practices by spotting and mitigating fraudulent activities.

This integration offers strategic advantages that enable marketers to be more effective in a competitive digital landscape. By implementing machine learning, businesses can not only optimise their advertising efforts but also protect their investments, so that every dollar spent works towards achieving tangible results.

Incorporating AI and machine learning technologies in SEM campaigns

Choosing the right AI tools is the first step to SEM success. The ideal tool offers a comprehensive suite for managing keywords, bids, ads, and performance, fitting seamlessly into your marketing stack. On the machine learning front, clarity in objectives paves the way for impactful integration. Whether aiming for higher CTRs or lower CPA, leveraging historical data and machine learning algorithms to predict and adjust is key. Constant experimentation and analysis refine strategies, ensuring SEM campaigns not only meet but exceed expectations. In the rapidly evolving world of SEM, AI and machine learning are not just options but necessities.

Strategies for successful implementation


In the evolving landscape of search engine marketing (SEM), leveraging AI and machine learning can set a campaign apart, maximising efficiency and returns. Below are strategies detailing how to integrate these advanced technologies effectively.

Choosing the right AI tools for SEM

In the realm of SEM, it is critical to select AI tools that are congruent with your marketing objectives. The market is replete with a myriad of options, each purporting to transform your SEM strategies radically. Nonetheless, not every tool offers equal value. It is advisable to opt for tools that provide an extensive analysis of keywords, insights into competitors, and capabilities for automated bid management. These functionalities ensure that your campaigns are both precisely targeted and economically efficient. Furthermore, the implementation of AI-driven tools for content optimisation can notably increase ad relevance, thereby enhancing click-through rates (CTR) and reducing cost per acquisition (CPA).

Conducting trials with various tools before finalizing a decision is imperative to identify a solution that is specifically catered to your requirements. Platforms offering advanced analytics should be given priority as they afford actionable insights critical for ongoing refinement. It is important to recognize that the effective use of AI in SEM transcends merely selecting cutting-edge technology; it encompasses the strategic application of these tools to continually refine and advance marketing strategies over time.

Integrating machine learning algorithms into SEM practices

Machine learning algorithms constitute a cornerstone in the advancement of search engine marketing (SEM) strategies, offering unprecedented insights into consumer behaviour and preferences. To capitalize on this opportunity, it is essential to integrate machine learning SEM technologies, emphasizing predictive analytics. Such an approach enables a deeper understanding of the interactions between different demographics and your advertisements, thereby improving audience segmentation.

Moreover, machine learning capabilities enable the automation of the most labour-intensive tasks within SEM, including bid management and A/B testing. This automation not only conserves precious time but also markedly elevates the efficiency of marketing campaigns. By adapting SEM practices to incorporate these algorithms, advertisements are perpetually optimised for performance, obviating the need for continuous manual intervention.

The fusion of machine learning’s predictive analytics with AI-enabled creative optimisation represents a pivotal evolution in Search Engine Marketing (SEM) strategies. This integrative approach allows for the real-time modification of advertisement components, including imagery and text, to better match user intentions, thereby markedly enhancing campaign outcomes.

Employing machine learning and AI within SEM goes beyond simply embracing cutting-edge technology; it denotes an ongoing dedication to a cycle of testing, education, and improvement. This dedication positions marketing endeavours at the vanguard of innovation during a period marked by rapid digital change.

Measuring success and ROI


Utilising metrics and KPIs to evaluate AI and machine learning impact

The integration of Artificial Intelligence (AI) and Machine Learning (ML) into Search Engine Marketing (SEM) strategies has profoundly altered the approaches utilized by digital marketing experts.

  • For an accurate assessment of the effectiveness of these advanced SEM technologies, focusing on relevant metrics and Key Performance Indicators (KPIs) is essential.
  • These criteria provide a transparent evaluation of the performance enhancements brought about by AI and ML.
  • They enable organizations to measure success and calculate Return on Investment (ROI) with greater accuracy.

Primarily, conversion rates emerge as a crucial metric. They serve as direct indicators of the efficiency of AI-enhanced ad targeting and bid management strategies, reflecting whether such technological advancements result in an increased proportion of visitors performing desired actions, such as completing purchases or registering for newsletters.

Cost per Acquisition (CPA) represents another fundamental metric. It illustrates the effectiveness with which AI and ML tools manage advertising expenditures to secure new clientele. Reduced CPA values indicate that these advanced SEM technologies are not only pinpointing the appropriate audience but also achieving this in a financially prudent manner.

Click-through rates (CTR) hold significant importance as well. An elevated CTR signifies that the predictive analytics and automated content optimisation facilitated by AI are effectively engaging the target demographic, thereby increasing their propensity to interact with advertisements.

Moreover, Return on Ad Spend (ROAS) is an essential measure of overall operational efficacy. It quantifies the revenue generated for every unit of currency expended on SEM initiatives. An enhancement in ROAS denotes that integrating AI and ML into SEM strategies is yielding more lucrative campaigns.
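
To make these definitions concrete, here is a minimal sketch that computes the metrics above from hypothetical campaign totals; every figure is illustrative rather than drawn from a real campaign.

```python
# Minimal sketch: computing common SEM metrics from hypothetical campaign totals.
# All numbers below are made up for illustration.

impressions = 120_000    # times the ad was shown
clicks = 3_600           # clicks on the ad
conversions = 180        # purchases, sign-ups, etc.
ad_spend = 4_500.00      # total spend in dollars
revenue = 13_500.00      # revenue attributed to the campaign

ctr = clicks / impressions                 # click-through rate
conversion_rate = conversions / clicks     # share of clicks that convert
cpa = ad_spend / conversions               # cost per acquisition
roas = revenue / ad_spend                  # return on ad spend

print(f"CTR: {ctr:.2%}")                           # 3.00%
print(f"Conversion rate: {conversion_rate:.2%}")   # 5.00%
print(f"CPA: ${cpa:.2f}")                          # $25.00
print(f"ROAS: {roas:.2f}x")                        # 3.00x
```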

Through meticulous observation of these metrics, organizations can comprehensively assess the impact of Artificial Intelligence (AI) and Machine Learning (ML) on their Search Engine Marketing (SEM) strategies. This analysis highlights not only the achievement of set goals but also identifies potential areas for enhancement. As AI and ML evolve, securing a competitive advantage in SEM requires ongoing vigilance and an adaptable methodology informed by data-driven insights.

Utilising machine learning and AI is vital in the pursuit of success in digital marketing. However, SEM is just one aspect of marketing, standing shoulder to shoulder with methods like SEO. Knowing the difference between the two will help you determine which one to use, or how to use them together, for a more prosperous digital marketing campaign.

Feature Image Credit: This photo was generated using Dall-E

By Alessio Francesco Fedeli

Graduating from Webster University with a degree in Management with an emphasis on International Business, Alessio is a Thai-Italian with a multicultural perspective on Thailand and abroad. By the same token, as someone passionate about sports and activities, Alessio also gives insight into various spots for a fun and healthy lifestyle.

Sourced from Thaiger

By

Unstructured text and data are like gold for business applications and the company bottom line, but where to start? Here are three tools worth a look.

Developers and data scientists use generative AI and large language models (LLMs) to query volumes of documents and unstructured data. Open source LLMs, including Dolly 2.0, EleutherAI Pythia, Meta AI LLaMa, StabilityLM, and others, are all starting points for experimenting with artificial intelligence that accepts natural language prompts and generates summarized responses.

“Text as a source of knowledge and information is fundamental, yet there aren’t any end-to-end solutions that tame the complexity in handling text,” says Brian Platz, CEO and co-founder of Fluree. “While most organizations have wrangled structured or semi-structured data into a centralized data platform, unstructured data remains forgotten and underleveraged.”

If your organization and team aren’t experimenting with natural language processing (NLP) capabilities, you’re probably lagging behind competitors in your industry. In the 2023 Expert NLP Survey Report, 77% of organizations said they planned to increase spending on NLP, and 54% said their time-to-production was a top return-on-investment (ROI) metric for successful NLP projects.

Use cases for NLP

If you have a corpus of unstructured data and text, some of the most common business needs include

  • Entity extraction by identifying names, dates, places, and products
  • Pattern recognition to discover currency and other quantities
  • Categorization into business terms, topics, and taxonomies
  • Sentiment analysis, including positivity, negation, and sarcasm
  • Summarizing the document’s key points
  • Machine translation into other languages
  • Dependency graphs that translate text into machine-readable semi-structured representations

Sometimes, having NLP capabilities bundled into a platform or application is desirable. For example, LLMs support asking questions; AI search engines enable searches and recommendations; and chatbots support interactions. Other times, it’s optimal to use NLP tools to extract information and enrich unstructured documents and text.

Let’s look at three popular open source NLP tools that developers and data scientists are using to perform discovery on unstructured documents and develop production-ready NLP processing engines.

Natural Language Toolkit

The Natural Language Toolkit (NLTK), released in 2001, is one of the older and more popular NLP Python libraries. NLTK boasts more than 11,800 stars on GitHub and lists over 100 trained models.

“I think the most important tool for NLP is by far Natural Language Toolkit, which is licensed under Apache 2.0,” says Steven Devoe, director of data and analytics at SPR. “In all data science projects, the processing and cleaning of the data to be used by algorithms is a huge proportion of the time and effort, which is particularly true with natural language processing. NLTK accelerates a lot of that work, such as stemming, lemmatization, tagging, removing stop words, and embedding word vectors across multiple written languages to make the text more easily interpreted by the algorithms.”

NLTK’s benefits stem from its endurance, with many examples for developers new to NLP, such as this beginner’s hands-on guide and this more comprehensive overview. Anyone learning NLP techniques may want to try this library first, as it provides simple ways to experiment with basic techniques such as tokenization, stemming, and chunking.
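
As a hands-on starting point, the sketch below shows a few of the preprocessing steps mentioned above in NLTK: tokenization, stop-word removal, stemming, and lemmatization. The sample sentence is arbitrary, and the download calls are one-time setup steps (corpus names can vary slightly across NLTK versions).

```python
# Minimal NLTK sketch: tokenization, stop-word removal, stemming, lemmatization.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time downloads of the required corpora and models.
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

text = "The contractors were reviewing the building specifications carefully."

tokens = word_tokenize(text.lower())                      # tokenization
stop_words = set(stopwords.words("english"))
content = [t for t in tokens if t.isalpha() and t not in stop_words]

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
print(content)                                            # ['contractors', 'reviewing', ...]
print([stemmer.stem(t) for t in content])                 # crude stems, e.g. 'review', 'specif'
print([lemmatizer.lemmatize(t) for t in content])         # dictionary forms, e.g. 'specification'
```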

spaCy

spaCy is a newer library, with its version 1.0 released in 2016. spaCy supports over 72 languages and publishes its performance benchmarks, and it has amassed more than 25,000 stars on GitHub.

“spaCy is a free, open-source Python library providing advanced capabilities to conduct natural language processing on large volumes of text at high speed,” says Nikolay Manchev, head of data science, EMEA, at Domino Data Lab. “With spaCy, a user can build models and production applications that underpin document analysis, chatbot capabilities, and all other forms of text analysis. Today, the spaCy framework is one of Python’s most popular natural language libraries for industry use cases such as extracting keywords, entities, and knowledge from text.”

Tutorials for spaCy show similar capabilities to NLTK, including named entity recognition and part-of-speech (POS) tagging. One advantage is that spaCy returns document objects and supports word vectors, which can give developers more flexibility for performing additional post-NLP data processing and text analytics.
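
The sketch below shows what those capabilities look like in spaCy; it assumes the small English model has been installed, and the sample sentence is illustrative.

```python
# Minimal spaCy sketch: named entity recognition and part-of-speech tagging.
# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Bangkok in January for $25 million.")

for ent in doc.ents:          # named entities with their labels
    print(ent.text, ent.label_)

for token in doc[:6]:         # part-of-speech tags for the first few tokens
    print(token.text, token.pos_)
```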

Spark NLP

If you already use Apache Spark and have its infrastructure configured, then Spark NLP may be one of the faster paths to begin experimenting with natural language processing. Spark NLP has several installation options, including AWS, Azure Databricks, and Docker.

“Spark NLP is a widely used open-source natural language processing library that enables businesses to extract information and answers from free-text documents with state-of-the-art accuracy,” says David Talby, CTO of John Snow Labs. “This enables everything from extracting relevant health information that only exists in clinical notes, to identifying hate speech or fake news on social media, to summarizing legal agreements and financial news.”

Spark NLP’s differentiators may be its healthcare, finance, and legal domain language models. These commercial products come with pre-trained models to identify drug names and dosages in healthcare, financial entity recognition such as stock tickers, and legal knowledge graphs of company names and officers.

Talby says Spark NLP can help organizations minimize the upfront training in developing models. “The free and open source library comes with more than 11,000 pre-trained models plus the ability to reuse, train, tune, and scale them easily,” he says.
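
As a rough sketch of getting started, the snippet below runs one of Spark NLP's pre-trained pipelines; the pipeline name follows the project's quickstart examples, the sample sentence is made up, and the exact output keys depend on the pipeline chosen.

```python
# Minimal Spark NLP sketch using a pre-trained pipeline.
# Assumes Spark NLP and PySpark are installed (e.g. pip install spark-nlp pyspark).
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()   # starts a Spark session configured for Spark NLP

# "explain_document_dl" is a general-purpose English pipeline from the docs.
pipeline = PretrainedPipeline("explain_document_dl", lang="en")

result = pipeline.annotate("John Snow Labs released a new clinical model last week.")
print(result["entities"])  # named entities found in the sentence
print(result["pos"])       # part-of-speech tags (available keys vary by pipeline)
```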

Best practices for experimenting with NLP

Earlier in my career, I had the opportunity to oversee the development of several SaaS products built using NLP capabilities. My first was a SaaS platform for searching newspaper classified advertisements, spanning cars, jobs, and real estate. I then led the development of NLP engines for extracting information from commercial construction documents, including building specifications and blueprints.

When starting NLP in a new area, I advise the following:

  • Begin with a small but representative sample of the documents or text.
  • Identify the target end-user personas and how extracted information improves their workflows.
  • Specify the required information extractions and target accuracy metrics.
  • Test several approaches and use speed and accuracy metrics to benchmark.
  • Improve accuracy iteratively, especially when increasing the scale and breadth of documents.
  • Expect to deliver data stewardship tools for addressing data quality and handling exceptions.

You may find that the NLP tools used to discover and experiment with new document types will aid in defining requirements. Then, expand the review of NLP technologies to include open source and commercial options, as building and supporting production-ready NLP data pipelines can get expensive. With LLMs in the news and gaining interest, underinvesting in NLP capabilities is one way to fall behind competitors. Fortunately, you can start with one of the open source tools introduced here and build your NLP data pipeline to fit your budget and requirements.

Feature Image Credit: TippaPatt/Shutterstock

By

Isaac Sacolick is president of StarCIO and the author of the Amazon bestseller Driving Digital: The Leader’s Guide to Business Transformation through Technology and Digital Trailblazer: Essential Lessons to Jumpstart Transformation and Accelerate Your Technology Leadership. He covers agile planning, devops, data science, product management, and other digital transformation best practices. Sacolick is a recognized top social CIO and digital transformation influencer. He has published more than 900 articles at InfoWorld.com, CIO.com, his blog Social, Agile, and Transformation, and other sites.

Sourced from InfoWorld

By

In this post, you will learn to clarify business problems & constraints, understand problem statements, select evaluation metrics, overcome technical challenges, and design high-level systems.

The LinkedIn feed is the starting point for millions of users on the site, and it builds the first impression for the user, which, as you know, will last. Having an interesting, personalized feed for each user delivers LinkedIn's most important core value: keeping users connected to their network and its activities while building their professional identity and network.

LinkedIn's personalized feed offers users the convenience of seeing updates from their connections quickly, efficiently, and accurately. In addition, it filters out spammy, unprofessional, and irrelevant content to keep you engaged. To do this, LinkedIn filters the newsfeed in real time by applying a set of rules that determine what type of content belongs, based on a series of actionable indicators and predictive signals. This solution is powered by machine learning and deep learning algorithms.

In this article, we will cover how LinkedIn uses machine learning to rank the user's feed. We will follow the workflow of a conventional machine learning project, as covered in these two previous articles:

The machine learning project workflow starts with the business problem statement and definition of constraints, followed by data collection and data preparation, then the modeling part, and finally deployment and putting the model into production. These steps will be discussed in the context of ranking the LinkedIn feed.

How LinkedIn Uses Machine Learning To Rank Your Feed 


1. Clarify Business Problems & Constraints

1.1. Problem Statement

The goal is to design a personalized LinkedIn feed that maximizes the long-term engagement of the user. The feed should provide beneficial professional content to each user, so it is important to develop models that eliminate low-quality content and leave only high-quality professional content. However, it is also important not to be overzealous about filtering content from the feed, or we will end up with a lot of false positives. We should therefore aim for high precision and recall in the classification models.

We can measure user engagement by the click probability, also known as the click-through rate (CTR). On the LinkedIn feed, there are different activity types, and each has a different CTR; this should be taken into consideration when collecting data and training the models. There are five main activity types:

  • Building connections: A member connects with or follows another member, company, or page.
  • Informational: Sharing posts, articles, or pictures
  • Profile-based activity: Activities related to the profile, such as changing the profile picture, adding a new experience, changing the profile header, etc.
  • Opinion-specific activity: Activities that are related to member opinions such as likes or comments or reposting a certain post, article, or picture.
  • Site-specific activity: Activities that are specific to LinkedIn such as endorsement and applying for jobs.

1.2. Evaluation Metrics Design

There are two main types of metrics: offline and online evaluation metrics. We use offline metrics to evaluate our model during the training and modeling phase. The next step is to move to a staging/sandbox environment to test for a small percentage of the real traffic. In this step, the online metrics are used to evaluate the impact of the model on the business metrics. If the revenue-related business metrics show a consistent improvement, it will be safe to expose the model to a larger percentage of the real traffic.

Offline Metrics

Maximizing CTR can be formalized as training a supervised binary classifier. For the offline metric, the normalized cross entropy can be used, since it makes the evaluation less sensitive to the background CTR:
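
One common formulation of normalized cross entropy (sometimes called normalized entropy) from CTR-prediction work is sketched below, where N is the number of examples, y_i is the click label (0 or 1), p_i is the predicted click probability, and p is the average (background) CTR:

```latex
\mathrm{NCE} = \frac{-\frac{1}{N}\sum_{i=1}^{N}\bigl(y_i \log p_i + (1 - y_i)\log(1 - p_i)\bigr)}
                    {-\bigl(p \log p + (1 - p)\log(1 - p)\bigr)}
```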

 


 

Online Metrics

Since the online metrics should reflect the level of engagement of users when the model is deployed, we can use the conversion rate, which is the ratio of clicks per feed.

1.3. Technical Requirements

The technical requirements will be divided into two main categories: during training and during inference. The technical requirements during training are:

  • Large training set: One of the main requirements during training is to be able to handle the large training dataset. This requires distributed training settings.
  • Data shift: In social networks, it is very common to have a data distribution shift from offline training data to online data. A possible solution to this problem is to retrain the models incrementally multiple times per day.

The technical requirements during inference are:

  • Scalability: To be able to serve customized user feeds for more than 300 million users.
  • Latency: It is important to have low latency so that users receive the ranked feed in less than 250 ms. Since multiple pipelines need to pull data from numerous sources before feeding activities into the ranking models, all these steps need to be completed within about 200 ms.
  • Data freshness: It is important that the models are aware of what the user has already seen; otherwise the feed will show repetitive content, which decreases user engagement. The data pipeline therefore needs to run very quickly.

1.4. Technical challenges

There are four main technical challenges:

  • Scalability: One of the main technical challenges is the scalability of the system, since the number of LinkedIn users to be served is extremely large, around 300 million. Every user sees, on average, 40 activities per visit and visits about 10 times per month, so we have around 120 billion observations or samples per month.
  • Storage: Another technical challenge is the huge data size. Assume a click-through rate of about 1% per month; the collected positive examples will then number roughly 1.2 billion, with around 119 billion negatives. We can assume that every data point has 500 features and, for simplicity, that every row of features needs 500 bytes of storage. One month therefore yields 120 billion rows of 500 bytes each, or about 60 terabytes in total (a quick sanity check of these figures follows this list). We will therefore keep only the data of the last six months or the last year in the data lake and archive the rest in cold storage.
  • Personalization: Another technical challenge is personalization, since different users with different interests need to be served, so the models must be personalized for each user.
  • Content Quality Assessment: There is no perfect classifier, and some content will fall into a gray zone where even two humans can have difficulty agreeing on whether it is appropriate to show to users. It is therefore important to combine human and machine solutions for content quality assessment.
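
As a quick sanity check on the scalability and storage estimates above, here is a back-of-the-envelope calculation using the approximate figures quoted in the text.

```python
# Back-of-the-envelope check of the scale and storage estimates.
# All inputs are the approximate figures quoted above.

users = 300e6             # monthly users
activities_per_visit = 40
visits_per_month = 10
ctr = 0.01                # assumed click-through rate
bytes_per_row = 500       # ~500 features stored in ~500 bytes per row

rows_per_month = users * activities_per_visit * visits_per_month
positives = rows_per_month * ctr
negatives = rows_per_month - positives
storage_tb = rows_per_month * bytes_per_row / 1e12

print(f"rows per month: {rows_per_month:,.0f}")   # ~120,000,000,000
print(f"positives:      {positives:,.0f}")        # ~1,200,000,000
print(f"negatives:      {negatives:,.0f}")        # ~118,800,000,000
print(f"storage:        {storage_tb:.0f} TB")     # ~60 TB per month
```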

2. Data Collection

Before training the machine learning classifier, we first need to collect labeled data so that the model can be trained and evaluated. Data collection is a critical step in data science projects: we need to collect data that is representative of the problem we are trying to solve and similar to what the model is expected to see in production. In this case study, the goal is to collect a lot of data across the different types of posts and content mentioned in subsection 1.1.

The labeled data we would like to collect in this case is click/no-click data from users' feeds. There are three main approaches to collecting such data:

  • Rank the user's feed chronologically: The data is collected from a feed ranked in chronological order. This approach can be used to collect data, but it is biased because the user's attention is drawn to the first few items in the feed. It also induces a data sparsity problem, since some activities, such as job changes, happen rarely compared to other activities and will be underrepresented in the data.
  • Random serving: The second approach will be randomly serving the feed and collecting click and no click data. This approach is not preferred as it will lead to a bad user experience and non-representative data, and also it does not help with the data sparsity problem.
  • Use an algorithm to rank the feed: The last approach is to use an existing algorithm to rank the user's feed and then randomly permute the top items (as sketched below). This adds some randomness to the feed and helps collect data across different activity types.
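
A minimal sketch of that third approach might look as follows; the feed items, the value of k, and the helper name serve_feed are all illustrative.

```python
# Sketch: rank the feed with an existing model, then randomly permute the
# top-k items before serving, so click/no-click labels are collected across
# a wider mix of activity types.
import random

def serve_feed(ranked_feed, k=5, seed=None):
    """Shuffle the top-k ranked items; keep the remaining items in ranked order."""
    rng = random.Random(seed)
    top, rest = list(ranked_feed[:k]), list(ranked_feed[k:])
    rng.shuffle(top)
    return top + rest

ranked = ["post_A", "job_change_B", "article_C", "share_D", "like_E", "post_F"]
print(serve_feed(ranked, k=5, seed=42))
```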

3. Data Preprocessing & Feature Engineering

The third step will be preparing the data for the modeling step. This step includes data cleaning, data preprocessing, and feature engineering. Data cleaning will deal with missing data, outliers, and noisy text data. Data preprocessing will include standardization or normalization, handling text data, dealing with imbalanced data, and other preprocessing techniques depending on the data. Feature Engineering will include feature selection and dimensionality reduction. This step mainly depends on the data exploration step as you will gain more understanding and will have better intuition about the data and how to proceed in this step.

The features that can be extracted from the data are:

  • User profile features: These features include job title, user industry, demographics, education, previous experience, etc. They are categorical, so they have to be converted into numerical form, as most models cannot handle categorical features directly. For high-cardinality features we can use feature embeddings, and for low-cardinality features we can use one-hot encoding (see the sketch after this list).
  • Connection strength features: These features represent the similarities between users. We can use embeddings for users and measure the distance between them to calculate the similarity.
  • Age of activity features: These features represent the age of each activity. This can be handled as a continuous feature or can be binned depending on the sensitivity of the click target.
  • Activity features: These features represent the type of activity, such as hashtags, media, posts, and so on. They are also categorical and, as before, have to be converted into numerical form using feature embeddings or one-hot encoding, depending on the cardinality.
  • Affinity features: These features represent the similarity between users and activities.
  • Opinion features: These features represent the user's likes and comments on posts, articles, pictures, job changes, and other activities.
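
As a small illustration of the two encodings mentioned for categorical features, the sketch below one-hot encodes a low-cardinality feature (user industry) and uses a learned embedding for a high-cardinality feature (job title); the feature values, vocabulary size, and embedding dimension are assumptions made for the example.

```python
# Sketch: one-hot encoding for low cardinality, embeddings for high cardinality.
import torch
import torch.nn as nn
from sklearn.preprocessing import OneHotEncoder

# Low-cardinality feature (user industry) -> one-hot encoding.
industries = [["finance"], ["healthcare"], ["finance"], ["education"]]
onehot = OneHotEncoder(sparse_output=False).fit(industries)  # use sparse=False on older scikit-learn
print(onehot.transform([["healthcare"]]))                    # e.g. [[0. 0. 1.]]

# High-cardinality feature (job title) -> learned embedding lookup.
num_job_titles = 50_000                              # assumed vocabulary size
job_title_embedding = nn.Embedding(num_job_titles, 16)
job_title_ids = torch.tensor([123, 4567])            # integer ids for two job titles
print(job_title_embedding(job_title_ids).shape)      # torch.Size([2, 16])
```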

Since the CTR is usually very small (less than 1%), the resulting dataset is imbalanced. A critical step in the data preprocessing phase is therefore to balance the data by resampling to increase the representation of the under-represented class.

However, this should be done only to the training set and not to the validation and testing set, as they should represent the data expected to be seen in production.
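
One simple way to do this, sketched below on synthetic data, is to oversample the rare click class in the training split only, leaving the held-out split untouched so it still reflects the production distribution.

```python
# Sketch: oversample the positive (click) class in the training split only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

X = np.random.rand(100_000, 20)                      # toy feature matrix
y = (np.random.rand(100_000) < 0.01).astype(int)     # ~1% positive (click) rate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

pos = X_train[y_train == 1]
neg = X_train[y_train == 0]
pos_up = resample(pos, replace=True, n_samples=len(neg), random_state=0)

X_bal = np.vstack([neg, pos_up])                     # balanced training set
y_bal = np.concatenate([np.zeros(len(neg)), np.ones(len(pos_up))])
print(f"training positive rate after resampling: {y_bal.mean():.2f}")  # ~0.50
# X_test / y_test are left untouched.
```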

4. Modeling

Now that the data is ready for modeling, it is time to select and train the model. As mentioned, this is a classification problem, with the click as the target. We can use a logistic regression model for this task. Since the data is very large, we can use distributed training, for example logistic regression in Spark or the method of multipliers.

We can also use deep learning models in a distributed setting, in which fully connected layers are used with a sigmoid activation function applied to the final layer.
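
For the distributed logistic regression option, a minimal PySpark sketch might look like the following; the input path and the column names "features" and "clicked" are assumptions made for the example.

```python
# Sketch: distributed logistic regression for click prediction with PySpark.
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("feed-ctr").getOrCreate()

# Assumed path; the data is expected to have a vector "features" column
# and a binary "clicked" label column.
train_df = spark.read.parquet("s3://bucket/feed_training_data/")

lr = LogisticRegression(featuresCol="features", labelCol="clicked",
                        maxIter=20, regParam=0.01)
model = lr.fit(train_df)                 # training is distributed across executors

predictions = model.transform(train_df)
predictions.select("clicked", "probability").show(5)
```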

For evaluation, we can follow two approaches. The first is the conventional splitting of the data into training and validation sets. Another approach, which avoids biased offline evaluation, is replayed evaluation, as follows:

  • Assume we have training data up to time point T. The validation data starts from T+1, and we rank it using the trained model.
  • The output of the model is then compared with the actual clicks, and the number of matched predicted clicks is calculated (a small sketch follows).
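
A toy sketch of that replayed-evaluation count might look as follows; the item scores and click labels are made up for illustration.

```python
# Sketch: replayed evaluation. Rank post-T items by predicted CTR and count
# how many of the top-ranked items were actually clicked.

def replayed_matches(predicted_ctr, clicked, top_k=3):
    """Count clicks among the top_k items as ranked by the model's scores."""
    order = sorted(range(len(predicted_ctr)), key=lambda i: predicted_ctr[i], reverse=True)
    return sum(clicked[i] for i in order[:top_k])

predicted_ctr = [0.30, 0.10, 0.25, 0.05, 0.20]   # model scores for items after time T
clicked = [1, 0, 0, 0, 1]                         # observed click labels
print(replayed_matches(predicted_ctr, clicked, top_k=3))  # 2 matched predicted clicks
```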

There are many hyperparameters to optimize, among them the size of the training data and the frequency of retraining the model. To keep the model updated, we can fine-tune the existing deep learning model on, for example, the most recent six months of training data.

5. High-Level Design

We can summarize the whole process of the feed ranking with this high-level design shown in figure 1.

Let’s see how the flow of the feed ranking process occurs, as shown in the figure below:

  • When the user visits the LinkedIn homepage, requests are sent to the Application server for feeds.
  • The Application server sends feed requests to the Feed Service.
  • Feed Service then gets the latest model from the model store and the right features from the Feature Store.
  • Feature store: The feature store holds the feature values. During inference, features must be accessible with low latency before scoring.
  • Feed Service receives all the feeds from the ItemStore.
  • Item store: The item store holds all activities generated by users. It also stores the models assigned to different users; since it is important to maintain a consistent user experience by providing the same feed-ranking method for each user, the item store serves the right model to the right user.
  • Feed Service will then provide the model with the features to get predictions. The feed service here represents both the retrieval and ranking service for better visualization.
  • The model returns the feeds ranked by predicted CTR, which are then returned to the application server.

 


Figure 1. LinkedIn feed ranking high-level design.

To scale the feed ranking system, we can put a Load Balancer in front of the Application Servers. This will balance and distribute the load among the several application servers in the system.

 


Figure 2. The scaled LinkedIn feed ranking high-level design.

6. References

  1. Strategies for Keeping the LinkedIn Feed Relevant
  2. Machine Learning Design Interview

By

Youssef Hosni is Co-Founder at Elfehres, Ph.D. Researcher – Computer vision, and Data Scientist

Sourced from KDnuggets

By Sharon Goldman

More than ever, organizations are putting their confidence – and investment – into the potential of artificial intelligence (AI) and machine learning (ML).

According to the 2022 IBM Global AI Adoption Index, 35% of companies report using AI today in their business, while an additional 42% say they are exploring AI. Meanwhile, a McKinsey survey found that 56% of respondents reported they had adopted AI in at least one function in 2021, up from 50% in 2020.


But can investments in AI deliver true ROI that directly impacts a company’s bottom line?

According to Domino Data Lab’s recent REVelate survey, which surveyed attendees at New York City’s Rev3 conference in May, many respondents seem to think so. Nearly half, in fact, expect double-digit growth as a result of data science. And 4 in 5 respondents (79%) said that data science, ML and AI are critical to the overall future growth of their company, with 36% calling it the single most critical factor.

Implementing AI, of course, is no easy task. Other survey data shows another side of the confidence coin. For example, recent survey data by AI engineering firm CognitiveScale finds that, although execs know that data quality and deployment are critical success factors for successful app development to drive digital transformation, more than 76% aren’t sure how to get there in their target 12–18 month window. In addition, 32% of execs say that it has taken longer than expected to get an AI system into production.

AI must be accountable

ROI from AI is possible, but it must be accurately described and personified according to a business goal, Bob Picciano, CEO of Cognitive Scale, told VentureBeat.

“If the business goal is to get more long-range prediction and increased prediction accuracy with historical data, that’s where AI can come into play,” he said. “But AI has to be accountable to drive business effectiveness – it’s not sufficient to say a ML model was 98% accurate.”

Instead, the ROI could be, for example, that in order to improve call centre effectiveness, AI-driven capabilities ensure that the average call handling time is reduced.

“That kind of ROI is what they talk about in the C-suite,” he explained. “They don’t talk about whether the model is accurate or robust or drifting.”

Shay Sabhikhi, cofounder and COO at Cognitive Scale, added that he’s not surprised by the fact that 76% of respondents reported having trouble scaling their AI efforts. “That’s exactly what we’re hearing from our enterprise clients,” he said. One problem is friction between data science teams and the rest of the organization, he explained, that doesn’t know what to do with the models that they develop.

“Those models may have potentially the best algorithms and precision recall, but sit on the shelf because they literally get thrown over to the development team that then has to scramble, trying to assemble the application together,” he said.

At this point, however, organizations have to be accountable for their investments in AI because AI is no longer a series of science experiments, Picciano pointed out. “We call it going from the lab to life,” he said. “I was at a chief data analytics officer conference and they all said, how do I scale? How do I industrialize AI?”

Is ROI the right metric for AI?

However, not everyone agrees that ROI is even the best way to measure whether AI drives value in the organization. According to Nicola Morini Bianzino, global chief technology officer, EY, thinking of artificial intelligence and the enterprise in terms of “use cases” that are then measured through ROI is the wrong way to go about AI.

“To me, AI is a set of techniques that will be deployed pretty much everywhere across the enterprise – there is not going to be an isolation of a use case with the associated ROI analysis,” he said.

Instead, he explained, organizations simply have to use AI – everywhere. “It’s almost like the cloud, where two or three years ago I had a lot of conversations with clients who asked, ‘What is the ROI? What’s the business case for me to move to the cloud?’ Now, post-pandemic, that conversation doesn’t happen anymore. Everybody just says, ‘I’ve got to do it.’”

Also, Bianzino pointed out, discussing AI and ROI depends on what you mean by “using AI.”

“Let’s say you are trying to apply some self-driving capabilities – that is, computer vision as a branch of AI,” he said. “Is that a business case? No, because you cannot implement self-driving without AI.” The same is true for a company like EY, which ingests massive amounts of data and provides advice to clients – which can’t be done without AI. “It’s something that you cannot isolate away from the process – it’s built into it,” he said.

In addition, AI, by definition, is not productive or efficient on day one. It takes time to get the data, train the models, evolve the models and scale up the models. “It’s not like one day you can say, I’m done with the AI and 100% of the value is right there – no, this is an ongoing capability that gets better in time,” he said. “There is not really an end in terms of value that can be generated.”

In a way, Bianzino said, AI is becoming part of the cost of doing business. “If you are in a business that involves data analysis, you cannot not have AI capabilities,” he explained. “Can you isolate the business case of these models? It is very difficult and I don’t think it’s necessary. To me, it’s almost like it’s a cost of the infrastructure to run your business.”

ROI of AI is hard to measure

Kjell Carlsson, head of data science strategy and evangelism at enterprise MLops provider Domino Data Lab, says that at the end of the day, what organizations want is a measure of the business impact of ROI – how much it contributed to the bottom line. But one problem is that this can be quite disconnected from how much work has gone into developing the model.

“So if you create a model which improves click-through conversion by a percentage point, you’ve just added several million dollars to the bottom line of the organization,” he said. “But you could also have created a good predictive maintenance model which helped give advance warning to a piece of machinery needing maintenance before it happens.” In that case, the dollar-value impact to the organization could be entirely different, “even though one of them might end up being a much harder problem,” he added.

Overall, organizations do need a “balanced scorecard” where they are tracking AI production. “Because if you’re not getting anything into production, then that’s probably a sign that you’ve got an issue,” he said. “On the other hand, if you are getting too much into production, that can also be a sign that there’s an issue.”

For example, the more models data science teams deploy, the more models they’re on the hook for managing and maintaining, he explained. “So [if] you deployed this many models in the last year, so you can’t actually undertake these other high-value ones that are coming your way,” he said.

But another issue in measuring the ROI of AI is that for a lot of data science projects, the outcome isn’t a model that goes into production. “If you want to do a quantitative win-loss analysis of deals in the last year, you might want to do a rigorous statistical investigation of that,” he said. “But there’s no model that would go into production, you’re using the AI for the insights you get along the way.”

Data science activities must be tracked

Still, organizations can’t measure the role of AI if data science activities aren’t tracked. “One of the problems right now is that so few data science activities are really being collected and analysed,” said Carlsson. “If you ask folks, they say they don’t really know how the model is performing, or how many projects they have, or how many code commits your data scientists have made within the last week.”

One reason for that is the very disconnected tools data scientists are required to use. “This is one of the reasons why Git has become all the more popular as a repository, a single source of truth for your data scientist in an organization,” he explained. MLops tools such as Domino Data Lab offer platforms that support these different tools. “The degree to which organizations can create these more centralized platforms … is important,” he said.

AI outcomes are top of mind

Wallaroo CEO and founder Vid Jain spent close to a decade in the high-frequency trading business in Merrill Lynch, where his role, he said, was to deploy ML at scale and do so with a positive ROI.

The challenge was not actually developing the data science, cleansing the data or building the trade repositories, now called data lakes. By far, the biggest challenge was taking those models, operationalizing them and delivering the business value, he said.

“Delivering the ROI turns out to be very hard – 90% of these AI initiatives don’t generate their ROI, or they don’t generate enough ROI to be worth the investment,” he said. “But this is top of mind for everybody. And the answer is not one thing.”

A fundamental issue is that many assume that operationalizing ML is not much different than operationalizing a standard kind of application, he explained, adding that there is a big difference, because AI is not static.

“It’s almost like tending a farm, because the data is living, the data changes and you’re not done,” he said. “It’s not like you build a recommendation algorithm and then people’s behaviour of how they buy is frozen in time. People change how they buy. All of a sudden, your competitor has a promotion. They stop buying from you. They go to the competitor. You have to constantly tend to it.”

Ultimately, every organization needs to decide how they will align their culture to the end goal around implementing AI. “Then you really have to empower the people to drive this transformation, and then make the people that are critical to your existing lines of business feel like they’re going to get some value out of the AI,” he said.

Most companies are still early in that journey, he added. “I don’t think most companies are there yet, but I’ve certainly seen over the last six to nine months that there’s been a shift towards getting serious about the business outcome and the business value.”

ROI of AI remains elusive

But the question of how to measure the ROI of AI remains elusive for many organizations. “For some there are some basic things, like they can’t even get their models into production, or they can but they’re flying blind, or they are successful but now they want to scale,” Jain said. “But as far as the ROI, there is often no P&L associated with machine learning.”

Often, AI initiatives are part of a Centre of Excellence and the ROI is grabbed by the business units, he explained, while in other cases it’s simply difficult to measure.

“The problem is, is the AI part of the business? Or is it a utility? If you’re a digital native, AI might be part of the fuel the business runs on,” he said. “But in a large organization that has legacy businesses or is pivoting, how to measure ROI is a fundamental question they have to wrestle with.”

By Sharon Goldman

Sourced from VentureBeat

Data science is a new interdisciplinary field of research that focuses on extracting value from data, integrating knowledge and methods from computer science, mathematics and statistics, and an application domain. Machine learning is the field created at the intersection of computer science and statistics, and it has many applications in data science when the application domain is taken into consideration.

From a historical perspective, machine learning was considered, for the past 50 years or so, as part of artificial intelligence. It was taught mainly in computer science departments to scientists and engineers and the focus was placed, accordingly, on the mathematical and algorithmic aspects of machine learning, regardless of the application domain. Thus, although machine learning deals also with statistics, which focuses on data and does consider the application domain, up until recently, most machine learning activities took place in the context of computer science, where it began, and which focuses traditionally on algorithms.

Two processes, however, have taken place in parallel to the accelerated growth of data science in the last decade. First, machine learning, as a sub-field of data science, flourished and its implementation and use in a variety of disciplines began. As a result, researchers realized that the application domain cannot be neglected and that it should be considered in any data science problem-solving situation. For example, it is essential to know the meaning of the data in the context of the application domain to prepare the data for the training phase and to evaluate the algorithm’s performance based on the meaning of the results in the real world. Second, a variety of populations began taking machine learning courses, people for whom, as experts in their disciplines, it is inherent and essential to consider the application domain in data science problem-solving processes.

Teaching machine learning to such a vast population, while neglecting the application domain as it is taught traditionally in computer science departments, is misleading. Such a teaching approach guides learners to ignore the application domain even when it is relevant for the modelling phase of data science, in which machine learning is largely used. In other words, when students learn machine learning without considering the application domain, they may get the impression that machine learning should be applied this way and become accustomed to ignoring the application domain. This habit of mind may, in turn, influence their future professional decision-making processes.

For example, consider a researcher in the discipline of social work who took a machine learning course but was not educated to consider the application domain in the interpretation of the data analysis. The researcher is now asked to recommend an intervention program. Since the researcher was not educated to consider the application domain, he or she may ignore crucial factors in this examination and rely only on the recommendation of the machine learning algorithm.

Other examples are education and transportation, fields that everyone feels they understand. As a result of a machine learning education that does not consider the application domain, non-experts in these fields may assume that they have enough knowledge in these fields, and may not understand the crucial role that professional knowledge in these fields plays in decision-making processes that are based on the examination of the output of machine learning algorithms. This phenomenon is further highlighted when medical doctors or food engineers, for example, are not trained or educated in machine learning courses to criticize the results of machine learning algorithms based on their professionalism in medicine and food engineering, respectively.

We therefore propose to stop teaching machine learning courses to populations whose core discipline is neither computer science nor mathematics and statistics. Instead, these populations should learn machine learning only in the context of data science, which repeatedly highlights the relevance of the application domain in each stage of the data science lifecycle and, specifically, in the modelling phase in which machine learning plays an important role.

If our suggestion, to offer machine learning courses in a variety of disciplines only in the context of data science, is accepted, not only will the interdisciplinarity of data science be highlighted, but the realization that the application domain cannot be neglected in data science problem-solving processes will also be further illuminated.

Don’t teach machine learning! Teach data science!

Orit Hazzan is a professor in the Technion’s Department of Education in Science and Technology; her research focuses on computer science, software engineering, and data science education. Koby Mike is a Ph.D. student at the Technion’s Department of Education in Science and Technology; his research focuses on data science education.

Sourced from Communications of the ACM

By Paul Towler

It can be tricky to follow the latest “tools of the trade” regarding online marketing strategies. The importance of local SEO, the rise of machine learning and customised content all represent trending topics.

However, what about the so-called “metaverse”? Might this realm represent the next leap forward with effective advertising? Before we look at five unique opportunities within this field, it is a good idea to take a closer look at exactly what the metaverse is.

What Exactly is the Metaverse?


Perhaps the simplest way to define the metaverse involves the concept of a three-dimensional social media community. This type of virtual reality allows members to interact with one another. However, this is also a much broader concept.

The metaverse essentially represents the numerous ways in which users communicate with one another across the digital community. This can include online role-playing games, smartphone applications and even the ability to buy and sell goods online.

We can now see that there is more than one way to define the metaverse. Perhaps this concept essentially involves more efficient ways of connecting with users, so it makes perfect sense that marketers have become interested in what it can offer. Let's now look at some potential opportunities for success.

  1. A Much Broader Reach


    As this article rightfully observes, the metaverse is more of a concept than a reality at the moment. However, marketers can still take advantage of its potential. Similar to social media channels, campaigns within this digital “ether” are thought to be capable of reaching a much wider audience, particularly millennials. As this world becomes ever more interconnected, it should be possible to employ a single advertising campaign to reach a wide range of consumers.

  2. The Immersive Nature of the Metaverse


    One of the pitfalls that marketing experts are likely to experience from time to time involves keeping the attention of a fickle audience. After all, the online community is laden with advertisements. This has caused many consumers to simply ignore these campaigns entirely; even if they happen to be offering truly unique products or services.

    Thankfully, things may soon be able to change thanks to the presence of the metaverse. We need to remember that this environment will provide a much more immersive means to communicate with others. Therefore, it may be possible to create advertising campaigns that allow users to interact with certain elements. Here are some interesting possibilities:

  • Changing the colour of an object.
  • Obtaining 360-degree views of what is being offered.
  • Clicking on specific portions of an advertisement to be taken to different pages of an embedded website.

The bottom line is that keeping the attention of potential clients is one of the best ways to ensure a conversion. 

  3. A New Type of Social Media Influencer


    Avatars are also predicted to play an important role within the metaverse. This is due to the inherently social nature of such a digital society. So, individuals will likely be given the opportunity to create their own avatars when communicating with others. Why should this be any different for businesses?

    We once again return to the decidedly personal side of marketing. Individuals do not wish to be thought of as consumers, but rather as people who share common goals and interests. Developing a branded avatar will provide businesses with a much more targeted (and even organic) means to promote what it is that they have to offer.

    From promoting virtual fashion exhibitions and advertising digital dancehalls to providing exclusive deals to other virtual “friends”, the possibilities are nearly limitless. 

  4. Taking the Notion of Online Sales to the Next Level


    In the past, digital sales were somewhat limited by the technology at their disposal. While it was possible to examine products in minute detail and to select certain options during the buying process, this was hardly the same as visiting a physical retail outlet.

    Once again, the metaverse is expected to rise to the occasion. As some facets of this world are set to be rooted within the world of augmented reality, the ability to truly interact with what is being offered could present some amazing marketing opportunities.

    Might it soon be possible to take a car for a digital “test drive” before committing to a purchase? Could users eventually try on an item of clothing or virtually tour a home before it is even built? These are some of the ways in which the metaverse will change the entire notion of marketing.

  5. What About “Meta Products?”


    This final concept is slightly strange and yet, it is also slated to have an impact on digital marketing. There could very well be options to create lines of virtual products to augment existing revenue streams. In fact, these strategies are already present. Examples of virtual goods include:

  • Avatars for a character within an MMORPG.
  • Virtual currencies.
  • Paid access to online events.
  • E-books and online distance learning courses.


When applied to the world of marketing, the value of meta products (such as free samples or discounts for the first hundred virtual shoppers) becomes very clear.

These are five opportunities for marketers who wish to take full advantage of what the metaverse is expected to become. Although the entire concept may appear a bit odd at first glance, many felt the same about social media during the early 2000s. Once again, it pays to think a few steps ahead of the competition.

By Paul Towler

Paul Towler is the technical operations director at SmartOffice, a software automation provider that helps companies with their document management systems.

By

Could the tech giants take control of the AI narrative and reduce choices for enterprises? Experts weighed the pros and cons in a recent online conference.

Artificial intelligence and machine learning require huge amounts of processing capacity and data storage, making the cloud the preferred option. That raises the specter of a few cloud giants dominating AI applications and platforms. Could the tech giants take control of the AI narrative and reduce choices for enterprises?

Not necessarily, AI experts emphasize, though with some caveats. The large cloud providers are definitely in a position to control the AI narrative from several perspectives.

That’s part of the consensus that emerged at a recent webcast hosted by the New York University Center for the Future of Management and the LMU Institute for Strategy, Technology and Organization, featuring Daron Acemoglu, professor at MIT; Jacques Bughin, professor at the Solvay School of Economics and Management; and Raffaella Sadun, professor at Harvard Business School.

There’s more to AI than cloud. The complexity and diversity of AI applications go well beyond the cloud environments where they are run — and therefore reduce the dominance of a few cloud giants.

Certainly, “AI will require more capacity in storage, of the information flow,” says Bughin. At the same time, “cloud is only one part of the total pie of the platform. It’s part of infrastructure, but the platform layer is what you develop in house and through a third party. This integration is going to be hybrid, even more important than the cloud itself. Let’s be very clear, it’s not about operation, it’s a lot of algorithms, it’s a lot of different data, that integration piece, that will require system integration, architecture and design. That means that different types of firms will be involved in that work.”

What Bughin worries about more is the innovation potential from AI startups that may be squashed by larger players gobbling up smaller companies and startups through mergers and acquisitions. “Companies like the big internet or AI guys are going and buying a lot of very small and very clever AI firms.”

At the same time, Sadun points out that smaller companies may be in a better position to leverage AI innovations — but need help with training and education to prepare them. “This issue of who benefits from AI is really important,” she says. “On the one hand, we might think the smaller firms may be able to use these technologies more effectively, because they are more nimble, more agile. Companies that are already digital can exploit and scale AI.”

Where the large cloud providers may also make their dominance felt is in the monopolization of the data that feeds AI systems, says Acemoglu. Cloud architecture itself can be based on price-sensitive and competitive cloud services, he explains. “But the cloud architecture will not enable you to exploit data. The areas where I worry about the future of AI technologies are those that enable firms to monopolize data. That’s where firms have an oversized effect on the future direction of technology. That means a few people in a boardroom are going to determine where a technology’s going to go. We want more people-focused and people-centric AI. That’s not going to be possible if a few firms that have a different business model dominate the future of technology.”

The value of an AI-driven enterprise “does not reside in the cloud that enables it,” Bughin believes. “I think there’s enough of competition for the price point not to destroy the value. The value will come from the fact that you have integrated these technologies where you work, and the way your company works, in your own back end. The back end is not going to be the battlefield. The value is from generating productivity and revenue, at a rate faster than what we’ve seen in traditional digital transformations.”

And, for the first time, we see the terms traditional and digital transformation used together in the same sentence. As these thought leaders relate, such transformations are moving to the next phase, enabling autonomous, software-driven operations and innovation through AI. It’s a question of whether large tech vendors control the momentum, or if it remains a market and practice with a diversity of choices. Stay tuned.

Feature Image Credit: Joe McKendrick


Sourced from ZDNet

By Anil Gupta

Artificial intelligence and machine learning are among the top marketing buzzwords we come across in the field of digital marketing. These technologies have already become an integral part of digital marketing and are being leveraged to make campaigns more personable and efficient.

For instance, artificial intelligence can make personalization easy and quick by creating accurate buyer personas. These personas are auto-generated to deliver a holistic audience segmentation, thereby improving the effectiveness of the campaigns. In addition, Netflix, Google, Uber, Spotify, Pinterest, and other apps use machine learning to personalize individual accounts and make relevant recommendations to their users.

The ever-improving algorithms and the exponential growth of data are encouraging business leaders and marketers to use AI, in the form of machine learning, natural language processing (NLP), deep learning, and other technologies. These technologies are helping them improve customer experience and conversions.

A Gartner survey shows that 37% of organizations are applying AI in some form or other to boost their digital performance.

This post highlights how AI and ML are proving to be game-changers in the digital marketing realm.

1. Offer a Better Understanding of the Audience

Great content starts with knowing the audience well. When a business knows its target audience, the connection feels more natural and relevant. That genuine connection goes a long way in building lasting relationships with customers.

In recent years, AI and ML have opened up a whole new world of possibilities for understanding audience behavior. AI tools and data-driven insights are helping businesses understand who they are reaching, what the customers want and need, when to communicate, and where to reach them.

Artificial intelligence helps marketers instantly define buyer personas. Platforms like Socialbakers then auto-generate these personas to deliver more holistic audience segmentation in the form of actionable insights. These insights help content marketers share inspiring stories that convert.

Keeping your audience at the centre of your online strategies is critical to business success. AI can help by offering unique audience insights, enabling businesses to deliver an integrated brand experience through relevant content. It also helps in selecting the most trustworthy and effective influencers for the brand.

2. Help with Lead Management

Big data, predictive analytics, and machine learning are being increasingly used in business intelligence these days. Machine learning, with its ability to bring out valuable hidden insights from large data sets, can create tangible value for businesses.

Leads are the driving force for businesses. They are the ones who will soon contribute to the organizational revenue. Hence, business leaders spend a significant amount of time in lead management. ML can be leveraged to improve and scale a firm’s approach to lead management, thereby boosting the bottom line. It helps firms generate better leads, qualify and nurture them, and ultimately monetize them effectively.

For instance, ML can help you create an ideal customer profile (ICP) to reach the best customers. ICP takes a structured look at the demographics and psychographics of an individual and determines their purchase intent and the content that matters to them. Thus, ICP can be used for lead scoring, allowing marketers to prioritize targeted accounts.
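To make the idea concrete, here is a minimal lead-scoring sketch using scikit-learn. The features, training data, and figures are hypothetical and purely illustrative; a real ICP-based model would be trained on your own CRM history.

```python
# Hypothetical lead-scoring sketch: fit a simple model on past leads and
# use the predicted conversion probability as the lead score.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical leads (made-up demographic and behavioural features).
leads = pd.DataFrame({
    "company_size":   [10, 250, 40, 1200, 5, 300, 80, 15],
    "pages_viewed":   [3, 12, 5, 20, 1, 9, 7, 2],
    "pricing_visits": [0, 2, 1, 4, 0, 3, 1, 0],
    "converted":      [0, 1, 0, 1, 0, 1, 1, 0],
})

model = LogisticRegression(max_iter=1000)
model.fit(leads.drop(columns="converted"), leads["converted"])

# Score a new lead: higher probability = higher priority for the sales team.
new_lead = pd.DataFrame([{"company_size": 500, "pages_viewed": 11, "pricing_visits": 2}])
score = model.predict_proba(new_lead)[0, 1]
print(f"Lead score: {score:.2f}")
```

The score can then feed the kind of prioritization described above, with the highest-probability leads routed to sales first.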

ML can also help firms generate more qualified leads from the traffic already coming to the site. For example, check out how Drift, a revenue acceleration platform, uses conversational AI to recognize quality from noise, learn from the conversations, and automatically qualify or disqualify website visitors. These qualifiers help the sales team focus on leads that are ready for conversions.

3. Curate and Create Better Content

AI is changing the game for content marketers. The technology is being used to automatically generate content for simple stories like sports news or stock market updates. AI also allows social channels to customize users’ news feeds.

But one content field where AI is increasingly applied is content curation. AI algorithms make it easier to collect target audience data to create relevant content at each stage of the marketing funnel.

For instance, the algorithms collect data on what the audience prefers to read, the questions they want answers to, or any specific concerns. Using this data, content marketers can curate and create relevant content that boosts customer experience and ultimately leads to conversions.

The North Face, for example, uses AI-powered technology from IBM Watson to recreate the shopping experience online. The tool relies on cognitive computing to bring the online and in-store experiences closer together.

Besides, machine learning feeds content strategies by discovering fresh research-based content ideas, identifying the top-performing topic clusters, and surfacing the most relevant keywords in a specific niche.

For instance, Google Analytics and SEMrush operate on machine-learning algorithms that are useful in keyword research and discovery, and content distribution. In addition, these tools can discover industry trends and show you ways to rank higher in SERP.

AI and ML-enabled tools improve the overall reception and performance of online content. In addition, the tools allow marketers to offer relevant and personalized digital experiences that positively influence engagement.

4. Help with Competitive Search Engine Ranking

Search engines are already using AI-enabled algorithms to deliver the most relevant SERP results. These algorithms rely on AI to understand the context of the content and spot irrelevant keywords. No wonder SEOs are constantly striving to understand these algorithms and coming up with strategies to create contextual, conceptual, and accurate content.

The placement of your business in the SERPs can make or break your online reputation and performance. AI technologies make it easier to create compelling content that answers the target audience’s queries, keywords, and phrases.

SEO isn’t a day’s job. It’s challenging, and the results of one’s efforts can only be seen after months. Fortunately, AI-based SEO tools help alleviate this stress. SEO tools like Moz, WooRank, BrightEdge, and MarketMuse rely heavily on AI to offer solutions like:

  • Keyword research
  • Search terms to make the content more relevant
  • Link-building opportunities
  • Trending topics
  • Optimum content length
  • User intent and more.

Tools like Alli AI can instantly optimize your website regardless of the CMS and your web development expertise. The platform performs a site-wide content and SEO audit, automatically optimizes the content, and resolves duplicate content issues. All this makes it easier for content creators to avoid poor-performing content and boost their online ranking.

5. Improve Page Speed

Google has put a premium on fast user experience by including page speed as one of its ranking signals. That’s why boosting page speed is one of the top priorities for all businesses, especially ecommerce firms. Webmasters therefore take all sorts of measures to improve page speed.

For instance, WordPress site owners may speed up WordPress by optimizing background processes, keeping the WP site updated, using a content delivery network (CDN), or using faster plugins. Of course, they also use various tools like PageSpeed Insights, load time testers, and CMS plugins for the purpose. But now, there’s another ML-powered solution available for boosting page speed – the Page Forecasting Model.

This model uses machine learning to predict, in real time, the next page a visitor is likely to click on. This allows Webmasters to preload that page in the background, thus improving the overall experience.

The algorithm is trained with historical data from Google Analytics.

For instance, user patterns such as moving from the home page to a category page, or from a product page to the shopping cart, are recognized and used to update the model. When a new visitor behaves similarly, the model is ready with a prediction for the next page.

However, the prediction accuracy is dependent on the amount of data available to train the algorithm and the website structure. So, the models will vary according to these factors. For instance, if yours is an ecommerce website that combines industry news with product pages, it’s better to use two or more models that can predict the behaviour per section.
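The exact model isn’t public, but the core idea can be sketched with a simple transition table built from past page-view sequences. The sessions below are made up; a real setup would train on exported Google Analytics data and a richer model.

```python
# Minimal sketch of next-page prediction from historical sessions.
# A simple transition-count (Markov) table illustrates the idea.
from collections import Counter, defaultdict

sessions = [
    ["/home", "/category/shoes", "/product/42", "/cart"],
    ["/home", "/category/shoes", "/product/17"],
    ["/product/42", "/cart", "/checkout"],
    ["/home", "/blog/news", "/home", "/category/shoes"],
]

transitions = defaultdict(Counter)
for pages in sessions:
    for current_page, next_page in zip(pages, pages[1:]):
        transitions[current_page][next_page] += 1

def predict_next(page):
    """Return the most likely next page, or None if the page is unseen."""
    counts = transitions.get(page)
    return counts.most_common(1)[0][0] if counts else None

# The front end could preload the predicted page in the background.
print(predict_next("/home"))            # likely "/category/shoes"
print(predict_next("/category/shoes"))  # likely "/product/42"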

6. Automate Website Analytics Process

Web analytics isn’t new. Businesses have been assessing user behaviour and tracking key performance metrics since the mid-’90s. But thanks to AI and machine learning, web analytics tools now have robust capabilities that allow businesses to automate the process. These tools can offer auto-generated reports and on-demand insights that feed marketing strategies.

Within a single visit to a webpage, each user generates hundreds of data points, such as the time spent on a page, the browser details, their location, and more. It is practically impossible to analyse all this data manually. AI and ML make such analysis faster and more accurate by speeding up data processing.

AI-based tools can help you track each visitor’s online behaviour, understand user journeys, and see how customers move through the marketing funnel. They also point out issues, if any arise.

Let’s say you have a blog post that gets a lot of traffic, but visitors just read the post and leave without taking action like subscribing to your newsletter or sharing your post on social media. AI-based tools can flag such issues, allowing you to take the necessary corrective action like adding internal links or improving your CTA.
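A very rough sketch of that kind of flagging is shown below. The thresholds, column names, and figures are hypothetical; commercial analytics tools apply far richer models.

```python
# Flag pages that attract traffic but rarely lead to a follow-up action.
import pandas as pd

pages = pd.DataFrame({
    "url":         ["/blog/guide", "/blog/tips", "/pricing", "/blog/story"],
    "sessions":    [12000, 800, 3000, 9500],
    "conversions": [24, 40, 450, 10],   # e.g. newsletter sign-ups, shares
})

pages["conversion_rate"] = pages["conversions"] / pages["sessions"]

# Hypothetical rule: lots of traffic but a conversion rate under 0.5%.
flagged = pages[(pages["sessions"] > 5000) & (pages["conversion_rate"] < 0.005)]
print(flagged[["url", "sessions", "conversion_rate"]])
```

Pages that show up here are candidates for the corrective actions mentioned above, such as stronger internal linking or a clearer CTA.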

Google Analytics (insights section), Adobe Analytics, and Kissmetrics are among the top web analytics tools that help firms see patterns in customer behaviour and predict future trends.

7. Improve Site Navigation

Site navigation is another critical area of digital performance where AI and ML can help. Though it may seem a small detail, the importance of having organized and easy-to-follow navigation cannot be ignored. Well-planned navigation increases visit duration, reduces the bounce rate, and boosts user experience. It also enhances the overall aesthetic appeal of the website design.

AI can help Webmasters create a user-friendly website structure that’s easy to navigate. AI-powered chatbots can guide users through the pages and help them find what they are looking for within the first few clicks. This significantly improves the user experience and sends good signals to search engines, indicating that your content is useful and relevant.

As a result, Google and other search engines are more likely to rank your pages above other websites offering similar content.

8. Design Better Websites

AI applications can improve the usability and experience of a website by enhancing the site’s appearance, strengthening its search abilities, managing inventory better, and improving interaction with website visitors. No wonder a growing number of designers and developers are moving towards AI-based design practices.

AI is slowly becoming an indispensable part of modern web design and development. Take the field of artificial design intelligence (ADI) systems, for instance. ADI has triggered a sudden shift in the way web design is done. It allows designers to integrate applications into the website for better user experience and functionality.

Check out The Grid, a website platform that automatically adapts its design to highlight the content. The platform uses machine learning, constraint-based design, and flow-based programming to dynamically adapt the website design to the content.

Today, we have several entrants in this space that are taking AI in web design to a whole new level. Brands like Adobe, Firedrop, Bookmark, Wix, Tailor Brands, and many others are leading the segment and leveraging the capabilities of AI in web design. In addition, most of these ADI platforms can learn and offer suggestions for optimizing the website for better user experience and SEO performance.

The Way Forward

Artificial intelligence and machine learning are proving to be awesome technologies when it comes to improving a firm’s digital performance. However, it is essential to remember that these ML models are only as good as the data that’s used to train them. Therefore, it’s critical to ensure that your marketing team has access to high-quality and accurate data.

So, before applying these technologies to your digital efforts, there are specific steps that you need to take.

  • Set up tags to track and capture on-site user behaviour.
  • House all the data from different sources in one central place like Google BigQuery, a Big Data analytics platform.
  • Invest in data deduplication to eliminate duplicate copies of repeating data from multiple sources (a minimal sketch of this step follows below).
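As a rough illustration of the deduplication step, the snippet below merges contact records from two hypothetical sources and keeps only the most recently updated copy of each email address. The column names and data are made up; production pipelines usually also handle fuzzy or near-duplicate matches.

```python
# Merge contact records from two sources and drop exact duplicates,
# keeping the most recently updated copy of each email address.
import pandas as pd

crm = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "name":  ["Ann", "Bob"],
    "updated_at": ["2023-01-10", "2023-02-01"],
})
web_forms = pd.DataFrame({
    "email": ["b@example.com", "c@example.com"],
    "name":  ["Bob", "Cara"],
    "updated_at": ["2023-03-05", "2023-01-20"],
})

combined = pd.concat([crm, web_forms], ignore_index=True)
combined["updated_at"] = pd.to_datetime(combined["updated_at"])

deduped = (combined.sort_values("updated_at")
                   .drop_duplicates(subset="email", keep="last"))
print(deduped)
```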

Once your data is in place, you will be in a great position to start deploying AI and ML for boosting your digital performance. In addition, the information shared above will prove to be useful as you start building machine learning solutions for improving your business’s online presence.

By Anil Gupta

Anil is the CEO & Co-Founder of Multidots, one of the top WordPress development agencies on the planet. He is a technopreneur with over 13 years of experience coding, thinking, and leading the business with mind and people with heart. He and his team are seasoned in delivering secure and feature-rich WordPress services for businesses big and small.

Sourced from readwrite

 

 


How this innovation can be a competitive advantage for any business, including yours.

Demand for machine learning is skyrocketing. This growth is driven not only by “middle adopters” recognizing the vast potential of machine learning after watching early adopters benefit from its use, but by steady improvements in machine learning itself. It may be too early to say with certainty that machine learning develops according to a predictable framework like Moore’s Law, the famous precept about computing power that has borne out for nearly 50 years and only recently began to show signs of strain. But the industry is clearly on a fast track.

As machine-learning algorithms grow smarter and more organizations come around to the idea of integrating this powerful technology into their processes, it’s high time your enterprise thought about putting machine learning to work, too.

First, consider the benefits and costs. It’s quite likely that your enterprise could leverage at least one of these five reasons to employ machine learning, whether it’s taming apparently infinite amounts of unstructured data or finally personalizing your marketing.

1. Taming vast unstructured data with limited resources

One of the best-known use cases for machine learning is processing data sets too large for traditional data crunching methods to handle. This is increasingly important as data becomes easier to generate, collect and access, especially for smaller B2C enterprises that often deal with more transaction and customer data than they can manage with limited resources.

How you use machine learning to process and “tame” your data will depend on what you hope to get from that data. Do you want help making more informed product development decisions? To better market to your customers? To acquire new customers? To analyse internal processes that could be improved? Machine learning can help with all these problems and more.

2. Automating routine tasks

The original promise of machine learning was efficiency. Even as its uses have expanded beyond mere automation, this remains a core function and one of the most commercially viable use cases. Using machine learning to automate routine tasks, save time and manage resources more effectively has a very attractive pair of side effects for enterprises that do it effectively: reducing expenses and boosting net income.

The list of tasks that machine learning can automate is long. As with data processing, how you use machine learning for process automation will depend on which functions exert the greatest drag on your time and resources.

Need ideas? Machine learning has shown encouraging real-world outcomes when used to automate data classification, report generation, IT threat monitoring, loss and fraud prevention and internal auditing. But the possibilities are truly endless.
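To make the data-classification example a little more tangible, here is a minimal sketch of automating a routine routing task with scikit-learn. The categories and example messages are invented purely for illustration.

```python
# Automatically route incoming support messages into categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I was charged twice on my invoice this month",
    "How do I reset my account password?",
    "The invoice total looks wrong",
    "I can't log in to my account",
]
labels = ["billing", "account", "billing", "account"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# New messages are classified automatically instead of being triaged by hand.
print(classifier.predict(["My invoice amount seems wrong"]))   # likely ['billing']
print(classifier.predict(["I forgot my account password"]))    # likely ['account']
```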

3. Improving marketing personalization and efficiency

Machine learning is a powerful force multiplier in marketing campaigns, enabling virtually endless messaging and buyer-profile permutations, unlocking the gate to fully personalized marketing without demanding an army of copywriters or publicity agents.

What’s especially encouraging for smaller businesses without much marketing expertise is that machine learning’s potential is baked into the top everyday digital-advertising platforms, namely Facebook and Google. You don’t have to train your own algorithms to use this technology in your next microtargeting campaign.

4. Addressing business trends

Machine learning has also proven its worth in detecting trends in large data sets. These trends are often too subtle for humans to tease out, or perhaps the data sets are simply too large for “dumb” programs to process effectively.

Whatever the reason for machine learning’s success in this space, the potential benefits are clear as day. For example, many small and midsize enterprises use machine learning technology to predict and reduce customer churn, looking for signs that customers are considering competitors and triggering retention processes with a higher probability of success.
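A churn-prediction setup along those lines could be sketched as follows. The features, figures, and model choice are hypothetical; real deployments train on far more history and validate carefully before acting on the scores.

```python
# Predict which customers are at risk of churning so retention
# campaigns can be triggered for them first.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

history = pd.DataFrame({
    "months_active":     [24, 3, 14, 2, 30, 6],
    "support_tickets":   [1, 4, 0, 5, 2, 3],
    "logins_last_month": [20, 1, 12, 0, 25, 3],
    "churned":           [0, 1, 0, 1, 0, 1],
})

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(history.drop(columns="churned"), history["churned"])

current_customers = pd.DataFrame({
    "months_active":     [5, 28],
    "support_tickets":   [4, 1],
    "logins_last_month": [2, 18],
})
risk = model.predict_proba(current_customers)[:, 1]
print(risk)  # higher values = stronger churn risk, candidates for retention offers
```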

Elsewhere, companies of all sizes are getting more comfortable integrating machine learning into their hiring processes. By reinforcing existing biases in human-led hiring and promotion, earlier-generation algorithms did more harm than good, but newer models are able to counteract implicit bias and increase the chances of equitable outcomes.

5. Accelerating research cycles

A machine-learning model unleashed in an R&D department is like an army of super-smart lab assistants. As more and more enterprises discover just what machine learning is capable of in and out of the lab, they’re feeling more confident about using it to eliminate some of the frustrating trial-and-error that lengthens research cycles and increases development costs. Machine learning won’t replace R&D experts anytime soon, but it does appear to empower them to use their time more effectively. More and better innovations could result.

So, should your business deploy machine learning? If the experience of competitor businesses that have already deployed it to great effect is any guide for your own, the answer is a resounding yes.

The more interesting question is how you choose to make machine learning work for your business. This prompts another question: what operational and structural changes will your machine learning processes bring? These changes, up to and including reducing headcounts in redundant roles or winding up entire lines of business, could be painful in the short run even as they strengthen your enterprise for the long haul.

Like all great innovations that increase operational efficiency and eliminate low-value work, machine learning does not benefit everyone equally. It’s up to the humans in charge of these algorithms to make the transition as orderly and painless as possible. It seems there are some things machine learning can’t do … yet.

Feature Image Credit: Yuichiro Chino | Getty Images 


Sourced from Entrepreneur Europe

By: RobOusbey

Last November, Moz VP Product, Rob Ousbey, gave a presentation at Web Con 2020 on the evolution of SEO, and we’re sharing it with you today! Rob draws on his years of research experience in the industry to discuss how SEO has changed, and what that means for your strategies.

Editor’s Note: Rob mentions a promo in the video that has since expired, but you can still get a free month of Moz Pro + free walkthrough here

Video Transcription

Hello, everyone. Thank you for that introduction. I very much appreciate it, and it’s wonderful to be with all of you here today. I’m Rob Ousbey from Moz.

Real quick, I was going to share my screen here and say that my gift to you for coming to the session today is this link. This won’t just get you a free month of Moz Pro, but everybody who signs up can get a free walkthrough with an SEO expert to help you get started. I’ll put this link up again at the end of the session. But if you’re interested in SEO or using a tool suite to help you, then Moz might be the toolset that can help.

Also, if you want to learn more about SEO, come join me on Twitter. I am @RobOusbey, and it would be wonderful to chat to you over there.

One reason I put my bio up here is because I’ve not been at Moz for all that long. I just started about a year ago. Before that, I was at Distilled, which is an international digital marketing agency, and I ran the Seattle office there for over a decade. I mention that because I want to share with you today examples of what I discovered when I was doing my client work. I want to share the research that my team members did when we were in your shoes.

A troubling story

So I wanted to kick off with an experience that stuck in my mind. Like I say, I’ve been doing this professionally for about 12 or 13 years, and back when I started, SEO was certainly more straightforward, if not getting easier.

People like my friend Rand Fishkin, the founder of Moz, used to do correlation studies that would discover what factors seem to correlate with rankings, and we’d publish these kinds of reports. These were the top ranking factors for 2005. And back then, they were broadly split between factors that assessed whether a page was relevant for a particular term and those that asked whether a site was authoritative. A lot of that relevance came from the use of keywords on a page, and the authority was judged by the number of links to the site. So we would help companies by doing good SEO. We’d put keywords on a page and build a bunch of links.

And I want to tell you a story about one of our clients. This is from just a couple of years ago, but it definitely stuck in my head. We were doing a lot of content creation for this client. We created some really informative pages and some really fun pages that would go viral and take over the Internet, and all of this earned them a lot of links. And this was the result of our efforts — a consistent, steady growth in the number of domains linking to that site. We had an incredible impact for them.

And here’s the graph of how many keywords they had when they ranked on the first page. This is fantastic. They ranked for a lot of keywords. And finally, here’s the graph of organic traffic to the site. Amazing.

But if you look a little closer, you notice something that is a bit troubling. We never stopped acquiring links. In fact, a lot of the content we produced is so evergreen that even content built two or three years ago is still gathering new links every single week. But the number of keywords we have ranking in the top 10 went up and up and then stopped growing. And not surprisingly, the same trend is there in organic search traffic as well. What appears to have happened here is that we got strong enough to get on the front page with these keywords, to be a player in the industry, but after that, just building more links to the site didn’t help it rank for more keywords and it didn’t help it get any more search traffic.

SEO fundamentals

It seems like all the SEO fundamentals that we’ve learned about, keywords and links and technical SEO still apply and they’re still necessary to help you become a player in a particular industry. But after that, there are other factors that you need to focus on.

Now this evolution of SEO into new factors has been an accelerating process. My colleague at Moz, Dr. Pete Meyers has been tracking and collecting a lot of data about this. Last year, Google made close to 4,000 improvements to their results, and that’s the result of running something like 45,000 different experiments.

Pete has also been tracking how much the search results change every day. Blue is really stable results. Orange is a lot of changes. And so if you felt like your rankings for your site are getting more volatile than ever, you’re not wrong. When we hit 2017, we saw more changes to the results every day than we ever had before.

Now the way that Google’s algorithms used to be updated was by a bunch of people in a room making decisions. In fact, it was this bunch of people in this room. They decided what factors to dial up or down to create the best results.

Google’s goal: portal to the Internet

But what does this mean? What does it mean to make the best results? Well, we should think about what Google’s real goal is. They want to be your portal to the Internet. They want your web experience to begin with a Google search, and you’ll continue to do that if they make you satisfied with the results you see and the pages you click on. If they send you to the perfect web page for your query, that’s a satisfying experience that reflects well on Google. If they send you to a page that’s a bad experience, it reflects poorly on them.

So it’s interesting to ask, “How would Google avoid doing that, and what would be a bad user experience?” Well, there are some obvious things, like if you arrive on a page that installs malware or a virus on your computer, or you arrive at a product page where everything is out of stock, or you go to a website that’s really slow or full of adverts. These are the pages Google does not want to include in their results.

And they’ve always been good at measuring these things pretty directly. More than 10 years ago they were testing how fast sites are and then using that to inform their rankings. If they spot malware or viruses on a site, they’ll temporarily remove it from the search results.

But they also tried more opinion-based measures. For a while, they were running surveys to ask people: Are you satisfied with these results? This was how they knew if their algorithm was working to get people what they wanted, to give them a good experience.

But the Google way of doing this is to try and do it at massive scale and hopefully to do it in the background, where users don’t have to answer a survey pop-up like this. And doing this in the background, doing it at huge scale has been more and more possible, firstly because of how much data Google has.

Click through rates

So I want to take a look at some of the kinds of things they might be looking at. Here’s an example of something they may want to do. Let’s consider the average click-through rate for every ranking position in the search results. Imagine that Google knows that 30% of people click on the first result and 22% click on number two and 5% click on number six and so on. They have a good understanding of these averages. But then for a particular keyword, let’s say they notice number six is getting 12% of the clicks. Something is going on there. What is happening? Well, whatever the reason why this is, Google could be better satisfying its users if that result was higher up in the rankings. Whoever is ranking at number six is what people want. Maybe they should rank higher.
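That comparison is easy to picture in code. The sketch below uses made-up baseline CTRs and click counts purely to illustrate the check being described; it is not Google’s algorithm.

```python
# Compare observed click-through rates against a positional baseline
# to spot results that attract far more clicks than their rank suggests.
expected_ctr = {1: 0.30, 2: 0.22, 3: 0.12, 4: 0.08, 5: 0.06, 6: 0.05}

observed = [  # (position, impressions, clicks) for one keyword
    (1, 10000, 2900),
    (2, 10000, 2100),
    (6, 10000, 1200),   # 12% CTR at position 6 -- unusually high
]

for position, impressions, clicks in observed:
    ctr = clicks / impressions
    if ctr > 1.5 * expected_ctr[position]:
        print(f"Position {position}: {ctr:.0%} CTR vs ~{expected_ctr[position]:.0%} expected "
              f"-- users may prefer this result; it could deserve a higher rank.")
```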

“Pogo sticking”

Here’s another example. This is what we call pogo sticking. A user does a search and then clicks on a result, and then after a couple seconds looking at the page, they realize they don’t like it, so they click the back button and they select a different result. But let’s say they don’t like that one either, so they click back and they select a third result, and now they stay here and they use that site. Imagine a lot of people did the same thing. Well, if we were Google, when we saw this happening, it would be a pretty strong indicator that the third result is what’s actually satisfying users. That’s actually a good result for this query, and it probably deserves to be ranking much higher up.

User satisfaction: refinement

There’s even an extension of this where users pogo stick around the SERPs, and then they decide they can’t find anything to do with what they wanted. So they refine their search. They try typing something else, and then they find what they want on a different query. If too many people are not satisfied by any of the results on the first page, it’s probably a sign to make a pretty serious change to that SERP or to nudge people to do this other query instead.

Google’s evolution with Machine Learning

And doing this kind of huge analysis on a massive scale is something that was made much easier with the advent of machine learning. Now for a long time the folks in charge of the search results at Google were very reluctant to incorporate any machine learning into their work. It was something they did not want to do. But then Google appointed a new head of search, and they chose someone who had spent their career at Google promoting machine learning and its opportunities. So now they’ve moved towards doing that. In fact, Wired magazine described Google as remaking themselves as a “machine learning first” company.

What we’re seeing now

So this is where I want to move from my conjecture about what they could do into giving some examples and evidence of all of this for you. And I want to talk about two particular modern ranking factors that we have evidence for and that if you’re doing SEO or digital marketing or working on a website you can start considering today.

User signals

Firstly, I talked about the way that users interact with the results, what are they clicking on, how are they engaging with pages they find. So let’s dive into that.

A lot of this research comes from my former colleague, Tom Capper. We worked at Distilled together, but he’s also a Moz Associate, and a lot of this has been published on the Moz Blog.

User engagement

Let’s imagine you start on Google. You type in your query, and here’s the results. Here’s page one of results. Here’s page two of results. Not going to worry much about what happens after that because no one tends to click through further than page two.

Now let’s think about how much data Google has about the way people interact with those search results. On the front page, they see lots going on. There are lots of clicks. They can see patterns. They can see trends. They can see what people spend time on or what they pogo stick back from. On the second page and beyond, there’s very little user engagement happening. No one is going there, so there’s not many clicks and not much data that Google can use.

So when we look at what factors seem to correlate with rankings, here’s what we see. On page two, there is some correlation between the number of links a site has and where it ranks. That’s kind of what we expected. That’s what SEOs have been preaching for the last decade or more. But when we get to the bottom of page one, there’s a weaker correlation with links. And at the top of page 1, there’s almost no correlation between the number of links you have and the position you rank in.

Now we do see that the folks on page one have more links than the sites on page two. You do need the SEO basics to get you ranking on the first page in the first place. We talk about this as the consideration set. Google will consider you for the first page of results if you have good enough SEO and if you have enough links.

But what we can take away from this is that when all that user data exists, when Google knows where you’re clicking and how people are engaging with sites, they will use those user metrics as a ranking factor. And then in situations where there isn’t much user data, the rankings might be more determined by link metrics, and that’s why deeper in the results we see links being a more highly correlated factor.

In a similar way, we can look at the whole keyword space, from the very popular head terms in green to the long tail terms in red that are very rarely searched for. Head terms have a lot of people searching for them, so Google has a lot of user data to make an assessment about where people are clicking. For long tail terms, which might only get a couple of searches every month, they just don’t have that much data.

And again, what we see is that the popular, competitive terms, where there’s lots of searching happening, Google seems to be giving better rankings to sites with better engagement. For long tail terms, where they don’t have that data, the rankings are more based on link strength. And there have been plenty of studies that bear this out.

Larry Kim found a relationship between high click-through rates and better rankings. Brian Dean found a relationship between more engagement with a page and better rankings. And Searchmetrics found that time on site correlated with rankings better than any on-page factor.

Contemporary SEO

And even though Google keeps a tight lid on this (they won’t admit to exactly what they’re doing, and they don’t describe their algorithms in detail), there are occasionally insights that we get to see.

A couple of years ago, journalists from CNBC had the chance to sit in on a Google meeting where they were discussing changes to the algorithm. One interesting part of this article was when Googlers talked about the things they were optimizing for when they were designing a new feature on the results page. They were looking at this new type of result they’d added, and they were testing how many people clicked on it but then bounced back to the results, which they considered a bad sign. So this idea of pogo sticking came up once again.

If that was something that they were monitoring in the SERPs, we should be able to see examples of it. We should be able to see that sites where people pogo stick don’t do so well in SEO, which is why I’m always interested when I find a page that, for whatever reason, offers a bad experience.

User metrics as a ranking factor

So here’s a site that lists movie trivia for any movie you might be interested in. It’s so full of ads and pop-ups that you can barely see any of the content on the page. It’s completely overrun with adverts. So if my hypothesis was correct, we’d see this site losing search visibility, and in fact that’s exactly what happened to them. Since their peak in 2014, the search visibility for the site has gone down and down and down.

Here’s another example. This is a weird search. It’s for a particular chemical that you’d buy if you were making face creams and lotions and that kind of thing. So let’s have a look at some of the results here. I think this first result is the manufacturer’s page with information about the chemical. The second is an industrial chemical research site. It has all the data sheets, all the safety sheets on it. The third is a site where you can buy the chemical itself.

And then here’s another result from a marketplace site. I’ve blurred out their name because I don’t want to be unfair to them. But when you click through on the result, this is what you get, an immediate blocker. It’s asking you to either log in or register, and there’s no way I want to complete this form. I’m going to hit the back button right away. Google had listed nine other pages that I’m going to look at before I even consider handing over all my data and creating an account here.

Now if my theory is right, as soon as they put this registration wall up, visitors would have started bouncing. Google would have noticed, and their search visibility would have suffered.

And that’s exactly what we see. This was a fast-growing startup, getting lots of press coverage, earning lots of links. But their search traffic responded very poorly and very quickly once that registration wall was in place. The bottom graph is organic traffic, and it just drops precipitously.

Here’s my final example of this, Forbes. It’s a 100-year-old publishing brand. They’ve been online for over 20 years. And when you land on a page, this is the kind of thing you see for an article. Now I don’t begrudge advertising on a page. They need to make some money. And there’s only one banner ad here. I was actually pleasantly surprised by that.

But I’m baffled by their decision to include a video documentary in the corner about a totally different topic. Like I came to read this article and you gave me this unrelated video.

And then suddenly this slides into view to make absolutely sure that I didn’t miss the other ad that it had in the sidebar. And then the video, that I didn’t want any way about an unrelated topic, starts playing a pre-roll ad. Meanwhile their browser alert thing pops up, and then the video — about the unrelated topic that I didn’t want in the first place — starts playing. So I’m trying to read and I scroll away from all this clutter on the page. But then the video — about an unrelated topic that I didn’t want in the first place — pins itself down here and follows me down the page. What is going on? And then there’s more sidebar ads for good measure.

And I want to say that if my theory is right, people will be bouncing away from Forbes. People will avoid clicking on Forbes in the first place, and they will be losing search traffic. But I also know that they are a powerhouse. So let’s have a look at what the data said.

I grabbed their link profile, and people will not stop linking to Forbes. They’re earning links from 700 new domains every single day. This is unstoppable. But here’s their organic search visibility. Forbes is down 35% year-on-year. I think this is pretty validating.

At this point, I’m confident saying that Google has too much data about how people engage with the search results and with websites for you to ignore this. If your site is a bad experience, why would Google let you in the top results to begin with and why would they keep you there?

What can you do?

So what can you do about this? Where can you start? Well, you can go to Google Search Console and take a look through the click-through rates for your pages when they appear in search. And in your analytics package, GA or whatever else, you can see the bounce rate for visitors landing on your pages, particularly those coming from search. So look for themes, look for trends. Find out if there are pages or sections of your site that people don’t like clicking on when they appear in the results. Find out if there are pages that, when people land on them, they bounce right away. Either of those is a bad sign, and it could be letting you down in the results.
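One practical way to start is to export the Search Console performance report and look for pages whose CTR lags what their average position would normally earn. The sketch below assumes a CSV export with page, clicks, impressions, and position columns; the column names and benchmark CTRs are assumptions for illustration only.

```python
# Flag pages whose click-through rate is unusually low for their average position.
import pandas as pd

# Assumed columns in the export: page, clicks, impressions, position
report = pd.read_csv("search_console_pages.csv")
report["ctr"] = report["clicks"] / report["impressions"]

# Rough benchmark CTRs by rounded position (illustrative numbers only).
benchmark = {1: 0.30, 2: 0.22, 3: 0.12, 4: 0.08, 5: 0.06}
report["expected_ctr"] = report["position"].round().map(benchmark).fillna(0.03)

# Pages earning less than half the CTR we'd expect at their position.
underperformers = report[report["ctr"] < 0.5 * report["expected_ctr"]]
print(underperformers.sort_values("impressions", ascending=False)
                     [["page", "position", "ctr", "expected_ctr"]]
                     .head(10))
```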

You can also qualitatively take a critical look at your site or get a third party or someone else to do this. Think about the experience that people have when they arrive. Are there too many adverts? Is there a frustrating registration wall? These things can hurt you, and they might need a closer look.

Brand signals

Okay, so we talked about those user signals. But the other area I want to look at is what I talk about as brand signals. Brand can apply to a company or a person. And when I think about the idea of being a brand, I think about how well-known the company is and how well-liked they are. These are some questions that signal you have a strong brand: that people have heard of you, that people are looking for you, that people would recommend you.

And this second one sounds like something SEOs know how to research. When we say people are looking for you, it sounds like we’re just talking about search volume. How many times every month are people typing your brand name into Google?

Again, my colleague, Tom Capper did some research about this that’s published on the Moz Blog. He looked at this problem and said, “Okay. Well, then let’s see if the number of people searching for a brand has any correlation to how well they rank.” And then there’s a load of math and a long story that led to this conclusion, that branded search volume did correlate with rankings. This is in blue. In fact, it correlated more strongly with rankings than Domain Authority does, so that’s the measure that shows you the link strength of a website.

So think about this. We’ve worried about links for two decades, but actually something around brand strength and maybe branded search volume seems to correlate better.

For data geeks, here’s a way of using the R-squared calculation to answer the question, “How much does this explain the rankings?” Again, what you need to know here is that branded search volume explained more of the rankings than anything else.
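For anyone who wants to run the same kind of check on their own keyword set, the calculation itself is straightforward. The numbers below are made up; the actual data and methodology behind the Moz study are described on the Moz Blog.

```python
# Correlation and R-squared between branded search volume and ranking position.
import numpy as np
from scipy import stats

branded_search_volume = np.array([50, 200, 900, 4000, 12000, 30000])
ranking_position      = np.array([9, 7, 5, 4, 2, 1])   # lower = better

# Use log volume because search volumes span several orders of magnitude.
slope, intercept, r_value, p_value, std_err = stats.linregress(
    np.log10(branded_search_volume), ranking_position)
print(f"R-squared: {r_value ** 2:.2f}")   # share of ranking variance "explained"

rho, _ = stats.spearmanr(branded_search_volume, ranking_position)
print(f"Spearman correlation: {rho:.2f}")  # negative: more branded searches, better rank
```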

So we’ve been preaching about this for a while, and then literally two days ago I saw this tweet. A team in the UK was asking about controversial SEO opinions. And the SEO manager for Ticketmaster came out and said this. He believes that when Google sees people searching for your brand name alongside a query, they start ranking you higher for the non-branded terms. And I don’t think this is controversial. And in fact, one of the replies to this was from Rand Fishkin, the founder of Moz. He also now believes that the brand signals are more powerful than what links and keywords can do.

What can you do?

So what can you do about this? Well, first you have to realize that any investment you make in brand building, whether that’s through PR activities or through traditional advertising, is good business anyway. But it now has twice the value because of its impact on SEO, because those activities will get people looking for you, following you, sharing your brand. If you work for a billion-dollar company, you should make sure that your SEO and PR teams are well-connected, well-aligned, and talking together. If you don’t work for a billion-dollar company, I’ve got two small, interesting examples for you.

Example: AdaFruit

First I want to call out this site, AdaFruit.com. They sell electronic components. There are many, many sites on the web that sell similar products. Not only do they have great product pages with good quality images and helpful descriptions, but I can also look at a product like this and then click through to get ideas for things I can build with it. These are some LED lights that you can chain together. And here’s an idea for a paper craft glowing crystal you can build with them. Here’s the wiring diagram I’d need for that project, plus some code I can use to make it more interactive. It’s only an $8 product, but I know that this site will make it easy for me to get started and to get value from making this purchase.

They go even further and have a pretty impressive AdaFruit channel on YouTube. They’ve got 350,000 subscribers. Here are the videos, for instance, that they publish every week walking you through all the new products that they’ve recently added to the site.

The CEO does a hands-on demo telling you about everything they have in stock. And then they have other collections of videos, like their women in hardware series that reaches an audience that’s been typically underserved in this space.

AdaFruit made a significant investment in content for their own channels, and it paid off with some brand authority, but brand trust and brand engagement as well.

Example: Investor Junkie

But I want to show you one other example here from arguably a much less exciting industry and someone who couldn’t invest so much in content. This is InvestorJunkie.com, a site that does reviews of financial services and products. And when I was working at the agency, we worked with this site and specifically with its founder, Larry. Larry was an expert in personal finance and particularly in personal investments. And this was his solo project. He blogged on the site and used his expertise. But as the site grew, he hired some contractors as well as our agency, and they created a lot of great content for the site, which really helped with SEO. But to make a significant impact on brand strength, we had to get the word out in front of loads of people who didn’t already know about him.

So we took Larry’s expertise and we offered him as a guest to podcasts, a lot of podcasts, and they loved having him on as a guest. Suddenly Larry was able to provide his expertise to huge new audiences, and he was able to get the Investor Junkie brand and their message in front of lots of people who had never heard of the site before.

But better still, this had a compounding effect, because people who are interested in these topics typically don’t just subscribe to one of these podcasts. They subscribe to a bunch of them. And so if they hear about Larry and Investor Junkie once, they might never think about it again. But if he shows up in their feed two or three or four times over the course of a few months, they’ll start to form a new association with the brand, maybe trusting him more, maybe seeking out the site.

And as an aside, there’s one other thing I love about podcasts, which is that if you’re creating a blog post, that can take hours and hours of work. If you’re creating a conference presentation, it can take days or weeks of work. If you’re a guest on a 30-minute podcast, it literally takes you about 30 minutes. You log on, you talk to a host, and then your part of the work is done.

So this can get you in front of a new audience. It gets people looking for you, which Google will notice. But it has even more SEO value as well, because every podcast typically has a page like this with show notes. It’s a page that Google can index, a page that Google can understand. And Google can see the signals of trust. It can see your brand being mentioned. It can see the links back to your site as well. I obviously can’t speak highly enough of podcasts for PR, for brand awareness, and even for SEO.

Did this help Larry and the Investor Junkie team? Yeah. This obviously wasn’t the extent of their SEO strategy. But everything they did contributed to them getting great rankings for a variety of competitive terms, and it helped them rank up against much bigger sites with much bigger teams and much bigger budgets. And that story actually came to an end just about two years ago, because the site was finally acquired for $6 million, which is not bad for a solo founder who was just busy building his own brand.

In summary

All right. I’ll wrap up with some of these thoughts. Google has been evolving. They’ve now been able to collect so much more data about the way people interact with the search results and other pages, and they’re now using machine learning to process all of that so they can better assess: Are we giving people a good user experience? Are the sites that we’re ranking the ones that satisfy people’s queries? The game of SEO has changed.

Now when you’re starting out, all the basics still apply. Come to Moz, read the Beginner’s Guide, do great technical SEO, do great keyword research, do great link building. Those are still necessary to be considered a player in your industry and to help get you near the first page for any terms you want to target.

But when you’re trying to move up the front page, when you’re trying to establish yourself much further and become a much bigger brand, we’re not seeing a lot of correlation between things like links and getting into the very top rankings for any particular term. Instead, think about the game that Google is playing. They want to make sure that when someone clicks on a result, they stay there. They don’t want to see this pogo sticking. They don’t want to see the link and the title that people want to click on sitting down at number six. So target their KPIs. Think about how you can help Google by making sure that your results are the ones people want to click on. Make sure that when people click on your results, that’s the page they stay on.

But ultimately, you will never lose out if you improve your brand authority and engagement with your content. These are just good things to do for business. A stronger brand, content, and a website that people want to spend time on is hugely important and pays dividends. But now it’s all doubly important because it also has this massive impact on your SEO.

Video transcription by Speechpad.

By: RobOusbey

Sourced from MOZ