Predictive Analytics

By Kathy Leake

Regardless of the business model, forecasting is vital for businesses because it provides a degree of insurance against uncertain future outcomes.

In today’s world, businesses have a wealth of data at their fingertips. However, that data is of no use if it is not turned into insights that inform decisions and improve business operations. Business intelligence, or BI, helps businesses achieve this goal. BI is a technology-driven way of analysing information and delivering actionable insights that help managers, executives and end users make better-informed decisions. In short, it turns raw data into insights that people can act on.

While traditional BI tools primarily monitor historical and current data, predictive analytics applies statistical algorithms, data mining, machine learning and other analytical techniques to historical data to determine likely outcomes and provide insight into the future. In addition to showing what happened in the past and why it happened, predictive analytics helps you understand what could happen next. By identifying opportunities early, it allows businesses to be proactive and agile.

For businesses, predictive analytics is crucial. Digital transformation has made markets more competitive than ever before. Using predictive analytics is like having a strategic vision of the future, mapping out the opportunities and threats. Therefore, companies should look for predictive models that:

  • identify potential opportunities
  • plan and optimize marketing strategies
  • map consumer behaviour
  • enhance efficiency and improve operations
  • identify and reduce risks

Any industry can use predictive analytics to forecast sales, detect risks, and improve operations. Financial institutions can use predictive analytics to detect fraud, evaluate credit risk or find new investment opportunities. Manufacturers can use it to identify the factors that lead to quality problems, production failures, and distribution risks.

With predictive analytics, sales forecasting can create real value for businesses, and many other business decisions depend on accurate sales forecasts. However, sales forecasting is still a time-consuming activity for sales professionals, who often rely on Excel spreadsheets and other tools that do not provide sufficient analytics and insight to forecast accurately. With advanced predictive analytics, sales professionals can automate rolling forecasts and gain greater transparency and smarter decision support.

Predictive Analytics: How Does It Work?

AI-based forecasting optimizes forecasts using an ensemble of machine learning algorithms: depending on which business metric you’re forecasting, the system selects the model that is best suited to it. The process consists of a series of steps:

  1. Determine the purpose of the forecast.
  2. Identify the items for the forecast.
  3. Choose a forecast model type.
  4. Obtain and analyse the data needed for the model.
  5. Make the forecast.
  6. Analyse, verify, and implement the results.
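A minimal sketch of steps 3–6, assuming a simple Python workflow: several candidate models are tried on historical sales, the one with the lowest holdout error is kept, and it produces the forecast. The candidate models and data here are hypothetical stand-ins, not any vendor’s actual forecasting engine.

```python
# Illustrative model selection and forecasting on synthetic monthly sales.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(36)
sales = 100 + 2.5 * months + 15 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 36)

train, holdout = sales[:30], sales[30:]

def linear_trend(history, horizon):
    # Fit a straight line to past sales and extend it forward.
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    future = np.arange(len(history), len(history) + horizon)
    return intercept + slope * future

def seasonal_naive(history, horizon, period=12):
    # Repeat the value observed one season (12 months) earlier.
    return np.array([history[len(history) - period + (h % period)] for h in range(horizon)])

candidates = {"linear_trend": linear_trend, "seasonal_naive": seasonal_naive}

# Evaluate each candidate on the holdout and pick the best (steps 3-4).
errors = {name: np.mean(np.abs(fn(train, len(holdout)) - holdout)) for name, fn in candidates.items()}
best = min(errors, key=errors.get)

# Refit on all data and produce the forecast the business will use (steps 5-6).
forecast = candidates[best](sales, 6)
print(f"best model: {best}, next 6 months: {np.round(forecast, 1)}")
```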

Regardless of the business model, forecasting is vital for businesses because it provides a degree of insurance against uncertain future outcomes. In addition to detecting and mitigating potential issues in advance, it helps organizations make informed decisions and set budgets and business goals. AI helps businesses oversee all these aspects with increased accuracy in the forecasting process.

Feature Image Credit: stock.adobe.com

By Kathy Leake

Sourced from Newsweek

 

Artificial Intelligence (AI) mimics the cognitive functions of the human mind, particularly in learning and problem-solving. Many of the apps that we use today are powered by AI. From voice-activated virtual assistants to e-commerce, AI applications are everywhere.

With the advancements in AI technology and access to big data, companies across different industries are integrating AI into their processes to find solutions to complex business problems.

The application of AI is most noticeable within the retail and e-commerce space. Websites and apps can interact intelligently with customers, creating a personalized approach that enhances the customer experience.

No matter what industry your business operates in, these seven tips can help you acquire and retain customers more efficiently, in a fraction of the time it takes to do things manually.

How to Use AI to Get and Keep Customers

1. Identify Gaps in Your Content Marketing Strategy

If you’re just starting with content marketing, you’ll need to know what type of content to create.

By using AI, you can identify the gaps, find fixes, and evaluate the performance of your content marketing campaign.

Take Packlane, a company that specializes in custom packaging design, for example. They publish high-quality content, like helpful blog posts that provide valuable information. At the same time, the content they publish makes it easier for their target market to understand their brand and services.

If you’re in the retail or e-commerce space, you can use AI to identify the gaps in your content marketing. Your content may be focused on your products and their features, but through AI, you can determine the relevant content that addresses your audience’s needs and pain points.

2. Pre-Qualify Prospects and Leads

Not every visitor to your site will become a paying customer. If you’re not getting sales despite massive traffic, you’re likely generating low-quality leads.

Some reasons why this happens include:

  • Targeting the wrong audience
  • Poor content marketing strategy
  • Using the wrong type of signup form
  • Promoting on the wrong social media platforms
  • Ineffective calls to action

These explain why 80% of new leads never convert into sales. The mistakes can be rectified with the help of artificial intelligence.

AI tools can extract relevant data to help you learn more about your target audience. These tools also provide predictive analytics on your customers’ behaviour. They, in turn, help improve your lead generation strategy because you’ll know which leads to pursue, where to find them, and how to effectively engage them.
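As a rough illustration of the kind of lead scoring such tools perform, the sketch below (a hypothetical example, not any specific product’s method) trains a logistic regression on a few invented behavioural signals and passes only the highest-scoring leads to sales.

```python
# Illustrative lead scoring on synthetic behavioural features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical features: pages viewed, pricing-page visits, marketing emails opened.
X = np.column_stack([
    rng.poisson(4, n),
    rng.binomial(3, 0.3, n),
    rng.poisson(2, n),
])
# Synthetic "converted" labels, correlated with the signals above.
logits = 0.3 * X[:, 0] + 1.2 * X[:, 1] + 0.4 * X[:, 2] - 4
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Score a batch of new leads and pass only the top scorers to the sales team.
new_leads = rng.poisson([4, 1, 2], size=(200, 3))
scores = model.predict_proba(new_leads)[:, 1]
hot_leads = np.argsort(scores)[::-1][:20]
print("highest-propensity leads:", hot_leads[:5], "scores:", np.round(scores[hot_leads[:5]], 2))
```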

3. Provide Personal Recommendations

According to a report by the Harvard Business Review, even though there are privacy concerns when consumers’ personal information changes hands, people still value personalized marketing experiences.

Brands that tailor their recommendations based on consumer data boost their sales by 10% over brands that don’t.

Recommendation algorithms typically rely on data such as browsing history, pages visited, and previous purchases. But AI is now advanced enough to analyse customers’ interactions with site content and find relevant products that will interest each individual customer. In this way, AI makes it easier to target potential customers and effectively puts the best products in front of site visitors.

Because of AI, recommendation engines are able to filter and customize the product recommendations based on each customer’s preferences. It’s a cycle of collecting, storing, analysing, and filtering the available data until it matches the customers’ preferences.

This is an effective way of acquiring and retaining customers because there’s an element of personalization.
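To make the filtering cycle concrete, here is a toy sketch of an item-based recommender: it suggests products similar to those a customer has already browsed or bought, using cosine similarity over a small, invented interaction matrix. Real engines use far richer signals, but the principle is the same.

```python
# Toy item-based recommender using cosine similarity.
import numpy as np

# Rows = customers, columns = products; 1 = purchased or viewed.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)
products = ["sneakers", "socks", "water bottle", "yoga mat", "headband"]

# Item-item cosine similarity matrix.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
similarity = (interactions.T @ interactions) / (norms.T @ norms)

def recommend(customer_row, top_n=2):
    # Score unseen items by their similarity to items the customer already has.
    scores = similarity @ customer_row
    scores[customer_row > 0] = -np.inf  # don't re-recommend owned items
    return [products[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(interactions[0]))  # suggestions for the first customer
```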

4. Reduce Cart Abandonment

A high cart abandonment rate is the bane of e-commerce business owners. According to a study by the Baymard Institute, online shopping cart abandonment rate is close to 70%.

Users abandon their online carts for various reasons:

  • high extra costs
  • complicated checkout process
  • privacy concerns
  • not enough payment methods, or
  • they’re not ready to buy yet.

Using AI-powered chatbots is one way to reduce cart abandonment. AI chatbots can guide the customers through their shopping journey.

AI chatbots can have a conversational approach and give the customer a nudge to prompt them to complete the purchase. These chatbots can also act as a virtual shopping assistant or concierge that can let a customer know about an on-the-spot discount, a time-sensitive deal, a free shipping coupon, or any other incentives that will encourage them to complete the checkout.

With AI, lost orders due to cart abandonment are recoverable and can lead to an increase in conversion rate for e-commerce businesses.

5. Increase Repurchases With Predictive Analytics

Predictive analytics is the process of making predictions based on historical data using data mining, statistical modelling, artificial intelligence, machine learning, and other techniques. It can generate insights, forecast trends, and predict behaviours based on past and current data.

In marketing, predictive analytics can be used to predict customers’ propensity to repurchase products, as well as how often they are likely to do so. When used to optimize marketing campaigns, AI-powered predictive analytics can increase customer response, drive repeat purchases, and promote cross-selling of relevant products.

It’s all part of a hyper-personalized marketing approach, where brands interact and engage with customers and improve their experience by anticipating their needs and exceeding their expectations.

With predictive analytics, you can focus your marketing resources on customer retention, targeting a highly motivated segment of your market that is more than happy to return and repurchase your products. This approach is less expensive than advertising or running pay-per-click campaigns.
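As an illustration of the repurchase-propensity idea, the sketch below scores synthetic customers on recency, frequency and monetary value and targets the top decile with a retention offer. The data, features and choice of model are assumptions made for the example.

```python
# Illustrative repurchase-propensity scoring on synthetic RFM features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 1000
recency = rng.integers(1, 180, n)       # days since last purchase
frequency = rng.poisson(3, n) + 1       # purchases in the last year
monetary = rng.gamma(2.0, 40.0, n)      # average order value

X = np.column_stack([recency, frequency, monetary])
# Synthetic ground truth: recent, frequent buyers repurchase more often.
p = 1 / (1 + np.exp(0.02 * recency - 0.5 * frequency + 1.0))
repurchased = rng.random(n) < p

model = GradientBoostingClassifier().fit(X, repurchased)
propensity = model.predict_proba(X)[:, 1]

# Retention play: send the top 10% most likely repurchasers a cross-sell offer.
target_segment = np.argsort(propensity)[::-1][: n // 10]
print("customers to target:", target_segment[:5],
      "avg propensity:", round(propensity[target_segment].mean(), 2))
```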

6. Improve Your Website User Experience

Every business—big or small—knows the importance of having a website, where visitors can interact with the brand, respond to a call to action, or purchase products. But it’s not enough to just have an online presence; it’s important that visitors to the site have a great experience while navigating through your site.

What makes for a great user experience? Users have different expectations. Some of them want faster loading time, while others want a simple and intuitive interface. But most important of all, they want to find what they’re looking for. It could be a product, content, or a solution to a problem. Whatever they may be, it’s up to you to meet their expectations.

With artificial intelligence, you can improve your website user experience tenfold. Here are some of the ways AI can be used to improve user experience.

Search relevance

This pertains to how accurately the search results match the search query. The more relevant the results, the better the search experience. Users are more likely to find content that answers their queries or products that solve their problems.

Personalized recommendations

Content that is tailor-made for the user tends to see greater engagement, which increases the likelihood of conversion. Amazon has perfected the product recommendation system using advanced AI and machine learning. AI gathers data from customers, uses it to gain insights, and applies predictive analysis to recommend relevant products and surface cross-selling opportunities.

AI chatbots

Chatbots contribute to a great user experience because they provide 24/7 assistance and support in the absence of human customer service. Users can get accurate answers to their inquiries quickly and efficiently, compared to scrolling through a text-based FAQ.

7. Social Listening for Potential Customers

Social listening is the process of analysing the conversations, trends, and buzz surrounding your brand across different social media platforms. It’s the next step beyond monitoring and tracking the social media mentions of your brand and products, hashtags, industry trends, and your competitors.

Social listening analyses what’s behind the metrics and the numbers. It determines the social media sentiment about your brand and everything that relates to it. It helps you understand how people feel about your brand. All the data and information you get through social listening can be used to guide you in your strategy to gain new customers.

Social media monitoring and listening can be done much more efficiently with the help of artificial intelligence. It’s an enormous task for a team of human beings to monitor and analyse data, but with AI-powered social media tools, all the tedious tasks can be automated. They can be trained to leverage data to provide valuable insights about your brand with high accuracy.

With AI and machine learning, your social listening can easily determine your audience, brand sentiments, shopping behaviour, and other important insights. By having this information within reach, you’ll know how you can connect with them more effectively and turn them from prospects to paying customers.

Key Takeaways/Conclusion

More companies across different industries are using the power of artificial intelligence and machine learning to significantly increase brand awareness, enhance customer engagement, improve user experience, and meet customer expectations.

  • AI can identify gaps in your content marketing strategy so that you can create content that’s relevant to your target audience.
  • AI can help you generate high-quality leads that are likely to buy your products.
  • With AI, you can personalize and tailor-fit your product recommendations based on your customers’ preferences, increasing repeat purchases.
  • AI can be integrated into your e-commerce site to reduce shopping cart abandonment.
  • AI significantly improves website user experience by making it intuitive, accessible, and easy to navigate.
  • AI-powered social media tools can help you monitor and gain valuable insights about your brand. You can then use this to develop a social media marketing strategy to gain new customers.

Achieve these milestones, and you’ll be sure to acquire new customers and retain existing ones.

Feature Image Credit: iStock/monsitj

Sourced from Black Enterprise

 

By Heather Fletcher

Marketers should first determine why they’re optimizing ads at all

Everyone loves a sure thing, especially someone who’s paying for an ad.

In a world where many business leaders waste money on ads by going with their gut feelings or worse—collecting as much data as possible on ad audiences and inevitably targeting them with irrelevant advertisements—what are the best ways for performance marketers to use predictive analytics to optimize ads?

Performance marketers need to start by figuring out why they’re optimizing their ads at all.

Do they want to increase sales? Acquire customers? Accomplish some other goal? This seemingly simple step is where a lot of performance marketers go wrong. They collect all of the data they can instead of the data they should.

Imagine if Karen Heath from Teradata hadn’t wanted to help a retailer increase diaper sales in 1992. She may never have sought out the data showing that when men bought the high-margin item, they also bought beer. By placing beer and diapers together, the retailer’s sales rose.

This one finding for one retailer that resulted in product placement changes in 1992 eventually evolved into the more advanced predictive analytics that optimized multichannel marketing in 2011.

Now, in 2020, the practice is so advanced the basic definition of predictive analytics says it incorporates machine learning techniques.

Predictive analytics practitioner Helen Xiaoqin Yi, a data scientist at a major electronics retailer, suggested performance marketers use “predictive tools to create audience segments or explore new potential audiences” with algorithms like SVM, logistic regression or neural networks.

“Then we can analyse their preferences from the comments, reviews, social media, interactions with ads, events or any relevant campaigns, and design several plans for different segments,” Yi said.
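A hedged illustration of the segment-building step Yi describes, using one of the algorithm families she names (a support vector machine here): a synthetic audience is scored on predicted ad response and split into segments that could each receive a different plan or creative. The features and thresholds are invented.

```python
# Illustrative audience segmentation via predicted ad-response scores.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 600
# Hypothetical per-user features: ad clicks, site visits, social engagement.
X = rng.poisson([2, 5, 1], size=(n, 3)).astype(float)
# Synthetic labels: did the user respond to a past campaign?
responded = rng.random(n) < 1 / (1 + np.exp(-(0.6 * X[:, 0] + 0.2 * X[:, 1] - 2.5)))

clf = SVC(probability=True).fit(X, responded)
scores = clf.predict_proba(X)[:, 1]

# Cut the audience into three segments; each gets its own plan and creative.
segments = np.digitize(scores, [0.33, 0.66])  # 0 = cold, 1 = warm, 2 = hot
for seg, label in enumerate(["cold", "warm", "hot"]):
    print(label, "audience size:", int((segments == seg).sum()))
```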

Stephen H. Yu, president and chief consultant at Willow Data Strategy, advised that performance marketers figure out who should be targeted with which ad and through which channel before personalizing ads.

“A series of personas based on propensity modelling can be useful in determining the most optimal offer and creative for each target,” Yu said.

Devyani Sadh, CEO and chief data officer of Data Square, provided three possible segments:

  • Prospects: Identify top-performing prospect ad audience segments based on demographic and psychographic similarities, content preferences, and interests of known high-value customer “clones.”
  • Active customers: Model customers’ prior history along with a “similarity index” of others with similar purchase patterns to optimize ad content. Examples include cross-sell or the next logical product (concurrent or sequential).
  • At-risk or lapsed customers: Stage 1 is identifying those who are headed toward attrition or have already lapsed but are likely to respond to an offer. Stage 2 is optimizing ad messaging for retargeting or other initiatives by predicting the special offers and promotions most likely to resonate with this group, based on history.
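As a simplified sketch of the “clone”/lookalike idea in the prospect segment above, the example below ranks prospects by their similarity to the average profile of known high-value customers. The features and the cosine-against-centroid similarity index are illustrative assumptions.

```python
# Illustrative lookalike ("clone") scoring against a high-value customer profile.
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical profile vectors: [age_norm, income_norm, content_affinity, category_interest]
high_value_customers = rng.normal([0.6, 0.7, 0.8, 0.9], 0.1, size=(50, 4))
prospects = rng.normal(0.5, 0.25, size=(1000, 4))

centroid = high_value_customers.mean(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

similarity_index = np.array([cosine(p, centroid) for p in prospects])
top_prospects = np.argsort(similarity_index)[::-1][:100]
print("closest 'clones' to the high-value profile:", top_prospects[:5])
```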

Yi suggested launching a small test of several ad designs using different times of day, durations of exposure and placements in order to prove that the optimization worked.

Then, give credit where credit is due. Yu added that performance marketers need to make note of how well each element and channel worked. In other words, don’t default to blanket attribution.

For example, performance marketers will want to keep track of more than just ad placement. Within this one area alone, Sadh said performance marketers can “optimize ad placement by ranking top-performing platforms, affiliates, social media sites, websites, search engines and regions by extrapolating from navigation patterns, search, browsing behaviour and digital identities of known converters.”

But even a sure thing won’t be a sure thing forever. Just like how search engines regularly update algorithms, performance marketers will need to revisit the predictive analytics process to continue optimizing their ads. Sometimes, marketers will have to start from scratch.

Yi said program or campaign dashboards will tell performance marketers if they need to update the models they build based on the process.

“No matter what the results are, we should always summarize and learn from them to prepare for the next campaigns,” she said.

Feature Image Credit: Predictive tools can help brands form audience segments and gain new consumers. iStock

By Heather Fletcher

Sourced from ADWEEK

By  

For those who wished they could Google anything and figure out what direction to take for their business, this company provides a solution.

MIT Prof. Alex Pentland, Director of the MIT Media Lab Entrepreneurship Program and named by Forbes as “one of the world’s 7 most powerful data scientists,” developed a new paradigm for machine learning based on AI.

Instead of building a data model for each predictive question, it uses a new social theory of human behavior that predicts future choices through behavioral commonalities. Combining the social element with the kind of cause-and-effect interaction associated with physics, he naturally enough called it Social Physics.


What his research demonstrated was that people behave in mathematically predictable ways. Just like physics determines the state of the natural universe, Social Physics governs the human universe.

But this was not just academic knowledge for Pentland.

Pentland was well-versed in real-life business dealings. He served as a founding member of scientific advisory boards for Google, AT&T, Nissan, and the UN Secretary General, advising on cutting-edge technology. On that combined basis of knowledge, skills, experience, and innovation, he created an automated engine that can answer any natural-language predictive question.

That is the cornerstone of the business Pentland cofounded in 2014: Endor. Endor extended Social Physics using proprietary technology into a powerful engine that is able to explain and predict human behavior.

More AI-enabled prediction to arrive at a Google-like capability

Endor enables the automation and democratization of AI and data science, allowing a company to move from paying heavily for answers to only a limited number of predictive questions each year to affordable, easy access to unlimited answers.

It addresses a problem that has long hampered businesses without the deep pockets to fund the data science teams needed to become truly data-driven and capitalize on the power of predictive analytics. Until recently, the power of predicting the future was within reach only of tech giants who could afford to invest millions of dollars in building their data science resources.

Smaller companies that wanted to direct their business strategy on the basis of predictions had to work with slow, complex, and expensive machine learning solutions. Now they can use the automated AI predictions provided by Endor, which MIT dubbed the “Google for predictive analytics.”

The same MIT article quotes the other Endor co-founder and its CEO, Dr. Yaniv Altshuler, reinforcing the Google comparison:

“It’s just like Google. You don’t have to spend time thinking, ‘Am I going to spend time asking Google this question?’ You just Google it.”

Altshuler declared, “It’s as simple as that.”

Altshuler has his own list of impressive credentials. He is a recognized expert on Machine Learning, Swarm Intelligence and Data Analysis who has authored over 60 scientific papers and filed 15 patents.

He expressed the great potential in Endor as follows:

“Imagine if you can ask any predictive business question such as ‘Who will complete a transaction tomorrow?’ or ‘Who will upgrade to Premium services in the next week?’ — this is a gamechanger for businesses and enterprises who want to act on their data in a speedy and accurate manner.”

Altshuler is featured in the video below in a conversation with Charles Hoskinson, a senior member of Endor’s Advisory Board, about the future of Predictive Analytics:

What it can do for businesses

Endor delivers faster response times, as no data scientist input, including modeling, coding or data gathering, is called for. It embeds actionable insights into an organization’s workflow by allowing its BI, sales, marketing and other business teams to find predictions themselves, ‘do-it-yourself’ style.

Endor makes accurate predictions scalable and accessible to businesses of all sizes, from enterprise to SMB, through proprietary Social Physics technology developed through years of research at MIT (not through NLP). It enables business users to ask predictive questions and get accurate, automated predictions without having to hire data scientists.

It is particularly convenient for those without data science resources to prepare the data. Endor is agnostic about the state of the big data it uses: even if the data has not been cleaned, it can be analyzed.

Plus Endor has the industry-first capability to compute on encrypted data without decrypting it. That means that it meets the standards set for global privacy and data security regulations, which should be a major relief for businesses that have to deal with European entities and prove themselves to be GDPR compliant.

Since its founding in 2014, Endor has grown an impressive clientele, including national banks, large financial services firms, and Fortune 500 companies such as Coca-Cola and MasterCard.

Endor is a pioneer in merging legacy infrastructure with innovative blockchain services, supporting the transition of its large Fortune 500 and enterprise customers to the Endor Protocol. This convergence of platforms will bring a larger pool of aggregate data (new data sources) to the Endor Protocol, which in turn will further increase the accuracy of its predictions.

Above is the HubCulture interview with both Pentland and Altshuler.

Beyond commercial applications

While it is primarily marketed to businesses, including wholesalers, retailers, and financial institutions, Endor’s technology can also be applied to other goals, including that of national security. MIT reported that it used its analytics for analyzing terrorist threats on the basis of Twitter data:

“Endor was given 15 million data points containing examples of 50 Twitter accounts of identified ISIS activists, based on identifiers in the metadata. From that, they asked the startup to detect 74 with identifiers extremely well hidden in the metadata.”

It only took an Endor employee 24 minutes to identify  80 “lookalike” ISIS accounts, more than half of which were in the pool of 74 well-hidden accounts named by the agency. The efficiency of the system is not just manifested in the relatively short time it took to do the analysis but also in the very low false-positive rate.

What’s in a name?

Endor at World Economic Forum, Davos 2019 – Dr. Yaniv Altshuler Co-Founder & CEO of Endor from Endor on Vimeo.

As the video above clarifies, the company’s name comes from Star Wars. Fans may recall Endor as the home of the cute, furry, pint-sized Ewoks who help the rebels defeat the Empire forces stationed there to protect the second Death Star in Return of the Jedi.

Here’s a clip to remind you of the scene at Endor.

The thing is, the name Endor was not actually born of George Lucas’ imagination. It first appears in the Bible, in the 28th chapter of the First Book of Samuel, in the account of the witch of Endor whom the king calls on for divination.

In the Bible’s account, King Saul requests that she summon the now-dead prophet Samuel to instruct him on what to do. So really the name Endor is more appropriate for predictive technology because of its original context than for the more geeky-cool Star Wars connection.

 

Feature Image Credit: kentoh/iStock

By  

Sourced from Interesting Engineering

Sourced from Dimensionless

The Next Generation of Data Science

Quite literally, I am stunned.

I have just completed my survey of data (from articles, blogs, white papers, university websites, curated tech websites, and research papers all available online) about predictive analytics.

And I have a reason to believe that we are standing on the brink of a revolution that will transform everything we know about data science and predictive analytics.

But before we go there, you need to know: why the hype about predictive analytics? What is predictive analytics?

Let’s cover that first.

Importance of Predictive Analytics

 


 

According to Wikipedia:

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. The enhancement of predictive web analytics calculates statistical probabilities of future events online. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining.

Predictive analytics is why every business wants data scientists. Analytics is not just about answering questions; it is also about finding the right questions to answer. The applications of this field are many; nearly every human endeavor is represented in the following excerpt from Wikipedia listing the applications of predictive analytics:

From Wikipedia:

Predictive analytics is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, mobility, healthcare, child protection, pharmaceuticals, capacity planning, social networking, and a multitude of numerous other fields ranging from the military to online shopping websites, Internet of Things (IoT), and advertising.

In a very real sense, predictive analytics means applying data science models to given scenarios that forecast or generate a score of the likelihood of an event occurring. The data generated today is so voluminous that experts estimate that less than 1% is actually used for analysis, optimization, and prediction. In the case of Big Data, that estimate falls to 0.01% or less.

Common Example Use-Cases of Predictive Analytics

 

(Figure: Components of Predictive Analytics)

 

A skilled data scientist can utilize the prediction scores to optimize and improve the profit margin of a business or a company by a massive amount. For example:

  • If you buy a book for children on the Amazon website, the website identifies that you have an interest in that author and that genre and shows you more books similar to the one you just browsed or purchased.
  • YouTube has a very similar algorithm behind its video suggestions. When you view a particular video, the site (or rather, the analytics algorithms running on it) identifies more videos that you would enjoy watching based on what you are watching now. In ML, this is called a recommender system.
  • Netflix is another famous example, where recommender systems play a massive role in the ‘shows you may like’ section, and the recommendations are well known for their accuracy in most cases.
  • Google AdWords (the text ads displayed at the top of every Google search) is another example of a machine learning algorithm whose usage can be classified under predictive analytics.
  • Department stores often arrange products so that commonly bought groups are easy to find together. For example, fresh fruits and vegetables sit close to the health food supplements and diet-control foods that weight-watchers commonly buy, while coffee/tea/milk and biscuits/rusks make another natural grouping. You might think this is trivial, but department stores have recorded up to a 20% increase in sales when such optimal grouping and placement was performed – again, through a form of analytics (a minimal sketch of this kind of basket analysis appears after this list).
  • Bank loans and home loans are often approved based on a customer’s credit score. How is that calculated? Through an expert system of rules, classification, and extrapolation of existing patterns – you guessed it – using predictive analytics.
  • Allocating budgets in a company to maximize the total profit in the upcoming year is predictive analytics. This is simple at a startup, but imagine the situation in a company like Google, with thousands of departments and employees, all clamoring for funding. Predictive Analytics is the way to go in this case as well.
  • IoT (Internet of Things) smart devices are one of the most promising applications of predictive analytics. It will not be long before sensor data from aircraft parts is run through predictive analytics to tell operators that a part has a high likelihood of failure. The same goes for cars, refrigerators, military equipment and infrastructure – anything that uses IoT (which is nearly every embedded processing device available in the 21st century).
  • Fraud detection, malware detection, hacker intrusion detection, cryptocurrency hacking, and cryptocurrency theft are all ideal use cases for predictive analytics. In this case, the ML system detects anomalous behavior on an interface used by the hackers and cybercriminals to identify when a theft or a fraud is taking place, has taken place, or will take place in the future. Obviously, this is a dream come true for law enforcement agencies.
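Here is the basket-analysis sketch referenced in the product-grouping bullet above: it counts how often item pairs appear in the same basket and computes lift, a classic co-occurrence signal for deciding which products to place near each other. The transactions are invented for illustration.

```python
# Toy market-basket analysis: pair support and lift on invented transactions.
from itertools import combinations
from collections import Counter

transactions = [
    {"coffee", "biscuits", "milk"},
    {"coffee", "milk"},
    {"fruit", "diet bars", "vitamins"},
    {"coffee", "biscuits"},
    {"fruit", "vitamins"},
    {"milk", "biscuits", "coffee"},
]
n = len(transactions)

item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(frozenset(p) for t in transactions for p in combinations(sorted(t), 2))

# Lift > 1 means the pair co-occurs more often than if purchases were independent.
for pair, count in pair_counts.most_common(3):
    a, b = tuple(pair)
    support = count / n
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
    print(f"{a} + {b}: support={support:.2f}, lift={lift:.2f}")
```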

So now you know what predictive analytics is and what it can do. Now let’s come to the revolutionary new technology.

Meet Endor – The ‘Social Physics’ Phenomenon

 

Image result for endor image free to use

End-to-End Predictive Analytics Product – for non-tech users!

 

In a remarkable first, a research team at MIT in the USA has created a new science called social physics, or sociophysics. Much about this field is deliberately kept highly confidential because of its massive disruptive power for data science, especially predictive analytics. The only requirement of this science is that the system being modeled has to be a human-interaction-based environment. To keep the discussion simple, we shall explain the entire system in points.

  • All systems in which human beings are involved follow scientific laws.
  • These laws have been identified, verified experimentally and derived scientifically.
  • By laws we mean equations, such as (just an example) Newton’s second law: F = ma (force equals mass times acceleration).
  • These equations establish laws of invariance – that are the same regardless of which human-interaction system is being modeled.
  • Hence the term social physics – like Maxwell’s laws of electromagnetism or Newton’s theory of gravitation, these laws are a new discovery that are universal as long as the agents interacting in the system are humans.
  • The invariance and universality of these laws have two important consequences:
    1. The need for large amounts of data disappears – Because of the laws, many of the predictive capacities of the model can be obtained with a minimal amount of data. Hence small companies now have the power to use analytics that was mostly used by the FAMGA (Facebook, Amazon, Microsoft, Google, Apple) set of companies since they were the only ones with the money to maintain Big Data warehouses and data lakes.
    2. There is no need for data cleaning. Since the model being used is canonical, it is independent of data problems like outliers, missing data, nonsense data, unavailable data, and data corruption. This is due to the orthogonality between the model (a Knowledge Sphere) being constructed and the data available.
  • Performance is superior to deep learning, Google TensorFlow, Python, R, Julia, PyTorch, and scikit-learn. Consistently, the model has outscored the latter models in Kaggle competitions, without any data pre-processing or data preparation and cleansing!
  • Data being orthogonal to interpretation and manipulation means that encrypted data can be used as-is. There is no need to decrypt encrypted data to perform a data science task or experiment. This is significant because the independence of the model functioning even for encrypted data opens the door to blockchain technology and blockchain data to be used in standard data science tasks. Furthermore, this allows hashing techniques to be used to hide confidential data and perform the data mining task without any knowledge of what the data indicates.

Are You Serious?


That’s a valid question, given these claims! And that is why I recommend that everyone with even the slightest interest in data science visit and completely read and explore the following links:

  1. https://www.endor.com
  2. https://www.endor.com/white-paper
  3. http://socialphysics.media.mit.edu/
  4. https://en.wikipedia.org/wiki/Social_physics

Now when I say completely read, I mean completely read. Visit every section and read every bit of text that is available on the three sites above. You will soon understand why this is such a revolutionary idea.

  1. https://ssir.org/book_reviews/entry/going_with_the_idea_flow#
  2. https://www.datanami.com/2014/05/21/social-physics-harnesses-big-data-predict-human-behavior/

These links above are articles about the social physics book and about the science of sociophysics in general.

For more details, please visit the following articles on Medium. These further document Endor.coin, a cryptocurrency built around the idea of sharing data with the public and getting paid for the use of the system and of your data. Preferably read all of them; if busy, at least read Article No. 1.

  1. https://medium.com/endor/ama-session-with-prof-alex-sandy-pentland
  2. https://medium.com/endor/endor-token-distribution
  3. https://medium.com/endor/https-medium-com-endor-paradigm-shift-ai-predictive-analytics
  4. https://medium.com/endor/unleash-the-power-of-your-data

Operation of the Endor System

For every data set, the first action performed by the Endor Analytics Platform is clustering, also known as automatic classification. Endor constructs what is known as a Knowledge Sphere, a canonical representation of the data set that can be constructed with as little as 10% of the data volume needed for the same project using deep learning.

Creation of the Knowledge Sphere takes 1–4 hours for a billion-record dataset (which is pretty standard these days).

A full explanation of the mathematics behind social physics is beyond our scope, but I will include the change in the data science process when the Endor platform was compared to a deep learning system built to solve the same problem the traditional way (by an expert data scientist on a six-figure salary).

An edited excerpt from https://www.endor.com/white-paper:

From Appendix A: Social Physics Explained, Section 3.1, pages 28-34 (some material not included):

Prediction Demonstration using the Endor System:

Data:
The data used in this example originated from a retail financial investment platform and contained the entire investment transactions of members of an investment community. The data was anonymized and made public for research purposes at MIT (the data can be shared upon request).

 

Summary of the dataset:
– 7 days of data
– 3,719,023 rows
– 178,266 unique users

 

Automatic Clusters Extraction:
Upon first analysis of the data, the Endor system detects and extracts “behavioral clusters” – groups of users whose data dynamics violate the mathematical invariances of Social Physics. These clusters are based on all the columns of the data, but are limited to the last 7 days, as this is the data that was provided to the system as input.

 

Behavioural Clusters Summary

Number of clusters: 268,218
Cluster sizes: 62 (mean), 15 (median), 52,508 (max), 5 (min)
Clusters per user: 164 (mean), 118 (median), 703 (max), 2 (min)
Users in clusters: 102,770 out of the 178,266 users
Records per user: 33 (mean), 6 (median) – applies only to users in clusters

 

Prediction Queries
The following prediction queries were defined:
1. New users to become “whales”: users who joined in the last 2 weeks who will generate at least $500 in commission in the next 90 days.
2. Reducing activity: users who were active in the last week who will reduce their activity by 50% in the next 30 days (but will not churn, and will still continue trading).
3. Churn in “whales”: currently active “whales” (as defined by their activity during the last 90 days), who were active in the past week, who will become inactive for the next 30 days.
4. Will trade in Apple shares for the first time: users who had never invested in Apple shares and will buy them for the first time in the coming 30 days.

 

Knowledge Sphere Manifestation of Queries
It is again important to note that the definition of the search queries is completely orthogonal to the extraction of behavioral clusters and the generation of the Knowledge Sphere, which was done independently of the queries’ definition.

Therefore, it is interesting to analyze the manifestation of the queries in the clusters detected by the system: Do the clusters contain information that is relevant to the definition of the queries, despite the fact that:

1. The clusters were extracted in a fully automatic way, using no semantic information about the data, and –

2. The queries were defined after the clusters were extracted, and did not affect this process.

This analysis is done by measuring the number of clusters that contain a very high concentration of “samples”; in other words, by looking for clusters that contain “many more examples than statistically expected”.

A high number of such clusters (provided that it is significantly higher than the amount received when randomly sampling the same population) proves the ability of this process to extract valuable, relevant semantic insights in a fully automatic way.

 

Comparison to Google TensorFlow

In this section a comparison between the prediction process of the Endor system and Google’s TensorFlow is presented. It is important to note that TensorFlow, like any other Deep Learning library, faces some difficulties when dealing with data similar to the one under discussion:

1. An extremely uneven distribution of the number of records per user requires some canonization of the data, which in turn requires:

2. Some manual work, done by an individual who has at least some understanding of data science.

3. Some understanding of the semantics of the data, which requires an investment of time, as well as access to the owner or provider of the data.

4. A single-class classification, using an extremely uneven distribution of positive vs. negative samples, tends to lead to overfitting of the results and requires some non-trivial maneuvering.

This again necessitates the involvement of an expert in Deep Learning (unlike the Endor system, which can be used by business, product or marketing experts, with no prerequisites in Machine Learning or Data Science).

 

Traditional Methods

An expert in Deep Learning, with sufficient expertise to handle the data, spent 2 weeks crafting a solution based on TensorFlow. The solution that was created used the following auxiliary techniques:

1. Trimming the data sequence to 200 records per customer, and padding the streams of users who have fewer than 200 records with neutral records.

2. Creating 200 training sets, each having 1,000 customers (50% known positive labels, 50% unknown), and then using these training sets to train the model.

3. Using sequence classification (an RNN with 128 LSTMs) with 2 output neurons (positive, negative), with the overall result being the difference between the scores of the two.
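A minimal Keras sketch of the baseline described above: each customer is a padded sequence of 200 records classified by an RNN with 128 LSTM units and 2 output neurons, with the final score taken as the difference between the two class probabilities. The feature width, padding value, training details and data are assumptions; the white paper does not publish the actual code.

```python
# Sketch of a sequence-classification baseline like the one described above.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 200, 16  # 200 records per customer (padded); feature width is assumed

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),        # ignore the neutral padding records
    tf.keras.layers.LSTM(128),                      # sequence classification with 128 LSTM units
    tf.keras.layers.Dense(2, activation="softmax"), # positive / negative output neurons
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# One synthetic training set of 1,000 customers (50% positive, 50% unknown treated as negative).
X = np.random.rand(1000, SEQ_LEN, N_FEATURES).astype("float32")
y = np.concatenate([np.ones(500, dtype="int32"), np.zeros(500, dtype="int32")])
model.fit(X, y, epochs=1, batch_size=64, verbose=0)

# The overall score is the difference between the two class probabilities.
probs = model.predict(X[:5], verbose=0)
scores = probs[:, 1] - probs[:, 0]
print(np.round(scores, 3))
```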

Observations (all statistics available in the white paper – and it’s stunning)

1. Endor outperforms TensorFlow in 3 out of 4 queries, and achieves the same accuracy in the 4th.

2. The superiority of Endor becomes increasingly evident as the task becomes “more difficult” – focusing on the top-100 rather than the top-500.

3. There is a clear distinction between the “less dynamic” queries (becoming a whale, churn, reduced activity – for which static signals should likely be easier to detect) and the “Who will trade in Apple for the first time” query, which is (a) more dynamic and (b) has a very low baseline, such that for the latter, Endor is 10x more accurate!

4. As previously mentioned, the TensorFlow results illustrated here employ 2 weeks of manual improvements done by a Deep Learning expert, whereas the Endor results are 100% automatic and the entire prediction process in Endor took 4 hours.

Clearly, the path going forward for predictive analytics and data science is Endor, Endor, and Endor again!

Predictions for the Future

Personally, one thing has me sold – the robustness of the Endor system in handling noise and missing data. This has long been the biggest bane of the data scientist in most companies (when data engineers are not available): 90% of a professional data scientist’s time would go into data cleaning and data preprocessing, since our ML models were acutely sensitive to noise. This is the first solution that has eliminated this ‘grunt’-level work from data science completely.

The second prediction: the Endor system works upon principles of human interaction dynamics. My intuition tells me that data collected at random has its own dynamical systems that appear clearly to experts in complexity theory. I am completely certain that just as this platform was built on the dynamical laws of human society, data collected in general has its own laws of invariance. And the first person to identify these laws and build another Endor-style platform on them will be at the top of the data science pyramid – the alpha unicorn.

Final prediction – democratizing data science means that now data scientists are not required to have six-figure salaries. The success of the Endor platform means that anyone can perform advanced data science without resorting to TensorFlow, Python, R, Anaconda, etc. This platform will completely disrupt the entire data science technological sector. The first people to master it and build upon it to formalize the rules of invariance in the case of general data dynamics will for sure make a killing.

It is an exciting time to be a data science researcher!

Data science is a broad field, and mastering all of these skills requires learning quite a few things.

Dimensionless has several resources to get started with.

Sourced from Dimensionless

By 

Although I’m not certain that they were all “Robert Wendland originals,” my late father had many witticisms that I credit to him. With impeccable timing, he would deliver a simple, pithy phrase that was not only appropriate for the moment but has also stuck with me for a lifetime. One in particular that I reference often suggests that “You can’t drive forward if you spend your time only looking in the rear-view mirror.”

The rate of change occurring in our lives and across virtually every industry is unprecedented. Oddly, the smartphone was first introduced less than a decade-and-a-half ago, and yet its ubiquity makes it unfathomable to think of going through life without one. The same could be said of self-check-in at airports. (Remember the days of paper tickets, travel agents and full-service?) And, in retail, where our company Hamacher Resource Group works across the retail supply chain, everything I remember from my childhood seems like ancient history.

I am personally fascinated by predictive analytics and the ability to combine data elements to create an approximate idea of what the future may hold. Although data modeling has been used quite extensively in certain industries (e.g., weather predictions based on specific historic models and key indicators, insurance estimates and actuarial tables, etc.), the concept of applying similar science to retail holds unrealized potential. Given technology advancements and data-mining tools, potential future predictions — in other words, answering the question “What lies ahead?” — seems far more attainable than ever before. Considering big data already captured within the retail sector, it becomes mind-boggling.

Virtually all industries have pools of information that could be used for predictive purposes. This reminds me of a presentation I sat through nearly three decades ago by a then-IBM executive who suggested that the future will be owned by those who have the keys to data and the intelligence that it generates. And, as recently as this month’s National Retail Federation event, the power of data was repeatedly emphasized.

According to a recent blog post from McKinsey, “winning decisions are increasingly driven by analytics more than instinct, experience, or merchant ‘art’; what succeeded in the past is now a poor predictor of the future, and analytics is helping to inform and unlock new pockets of growth.”

So, what types of data do we have available in the retail class of trade? Here’s a partial list:

  • Point-of-sale transaction-level data
  • Retail pricing strategies
  • Consumer-based loyalty information (shopper insights)
  • Physical store size and demographic characteristics
  • Department sizes and product assortment (planogram data)
  • Store navigation intelligence (traffic flow)
  • Social media metrics
  • Competitive intelligence
  • Seasonality and other external trends (e.g., weather)

If one takes the time to imagine connecting discrete data elements and begins creatively combining them in unique ways, amazing predictions can be formed. Whether informing promotional campaigns, personalizing customer offers, improving employee training, building customer retention or simply forecasting demand and optimizing assortment and stocking levels, using big data and some gut instinct can conjure up interesting insights. Here are a couple on my mind.

Imagine aligning store navigation intelligence, shopper loyalty data and neighborhood demographics to consider how to attract others within the area by enhancing navigation and department placement to match their needs. Once again, continuing to do the same thing time and time again may not be positioning the retailer to best capture additional sales.

As an example, say a grocery store was initially designed to attract stay-at-home moms with two kids. The store was arranged to cater to their needs, but the neighborhood is now made up of older adults with limited mobility and fixed incomes. How could the data predict their navigation patterns and category preferences, and better cater to their overall shopping occasion? The hypothesis I would look to prove is whether shelves should be lower and aisles wider, whether certain categories (e.g., sugary cereals and baby diapers) should be downsized, and how to rearrange the checkout to be less confined and more staff-centric. Predictive analytics could be used to model the potential result of such changes and allow the retailer to assess whether such an investment would be justified by the return.

Another scenario fueled by predictive analytics could look at the success ratio of certain product launches within a retail operation. Examining planogram and assortment data alongside point-of-sale transaction details and customer loyalty intelligence, future predictions can be created to fuel decisions about placement, promotion and timing.

Today a delicate balance exists between staid and proven stock keeping units (SKUs) and potential innovators and disrupters. Developing analytic modeling that can better predict performance could greatly improve buying decisions and bolster the performance of the category. This would certainly help in honing the process of new item evaluation and potentially reduce the number of new products shelved that do not perform to the expectations of the retailer.

In essence, only looking in the rear-view mirror and continually employing the same process will merely produce the same results. On the other hand, using learnings from the past and combining new, enriched data elements could generate a true breakthrough that drives new results.

Feature Image Credit: Getty

By 

Vice President, Strategic Relations at Hamacher Resource Group, Inc., passionate about optimizing results across the retail supply chain.

Sourced from Forbes

By Matthew Kelleher

We focused our efforts on seeing whether using Predictive Analytics combined with AI-driven marketing automation can help improve the customer experience around the key stages of the customer lifecycle – a prospect’s first purchase, second purchase, multi-purchase, VIP and churn. Our strategy was to improve marketing performance at each of these stages by using Predictive Analytics to understand where each customer is on their own journey. When the brand understands the customer’s next likely action, it can target those individuals with more effective comms, ultimately driving up total customer lifetime value.
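As a simplified sketch of this lifecycle idea, the example below places each customer at a stage from their purchase history, flags churn risk from recency, and picks a treatment accordingly. The thresholds and treatments are invented for illustration and are not RedEye’s actual models.

```python
# Illustrative lifecycle staging and treatment selection.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    purchases: int
    days_since_last: int
    lifetime_value: float

def lifecycle_stage(c: Customer) -> str:
    if c.purchases == 0:
        return "prospect"
    if c.days_since_last > 180:          # long silence flags churn risk (assumed threshold)
        return "churn risk"
    if c.lifetime_value > 1000:          # assumed VIP threshold
        return "VIP"
    return {1: "first purchase", 2: "second purchase"}.get(c.purchases, "multi-purchase")

TREATMENTS = {
    "prospect": "welcome series nudging first purchase",
    "first purchase": "second-purchase incentive",
    "second purchase": "cross-sell recommendations",
    "multi-purchase": "loyalty programme invite",
    "VIP": "early access / concierge offer",
    "churn risk": "win-back offer",
}

for c in [Customer("A", 1, 12, 80.0), Customer("B", 6, 200, 640.0), Customer("C", 9, 5, 2400.0)]:
    stage = lifecycle_stage(c)
    print(c.name, "->", stage, "->", TREATMENTS[stage])
```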

Results at each stage of the lifecycle have been excellent. For instance, one brand saw an increase of 83.5% in second purchase rate. This, and other case studies, can be found here. Anyone who attended my presentation at either Technology for Marketing or Festival of Marketing recently would have seen me present the outcome of the longer analysis into whether we could improve Customer Lifetime Value. For those of you who could not attend, you will have to wait for the release of the new case studies to the website in the next couple of weeks.

The obligatory Q&A session followed my presentations at both these events. But to be honest, I always find these questions instructive and rather good fun. Too often, and I’m not alone in this, I get carried away with what I want to say, and questions illustrate key elements that I’ve missed! So, these were the six questions that were asked (although I must admit I thought there were more) with a few more thoughts than I had time to give on the day.

  1. How has GDPR affected your data gathering? How did you fight an increase (if any) in unsubscribed customers?

    Whilst it has felt like forever, the period since May 25th is still, in the grand scheme of things, relatively short! Our impression is that, in general (can you see me caveating this response very heavily!) the long-term impact on sign ups and consent is relatively little. However, for some organisations their ‘re-permissioning’ experiences have been fairly disastrous. For instance, a database of active contacts of 500,000 reduced to 6,500 (if you are in this group then you are not alone). It’s not the objective of this blog to cast aspersions on the quality of advice given to some organisations, all I can really say is that without the correct permissions, processing data for comms or even for Predictive Analytics is not possible. There are minimum amounts of data required to make Predictive Analytics work, so for many organisations with smaller databases Predictive Analytics may not work and the issues surrounding GDPR only serve to increase that group.

  2. Do you have an example of using Predictive Analytics for recruitment initiatives – getting new customers rather than increasing the value of current customers?

    RedEye has not worked with any organisations to develop models around acquisition. However, our whole strategy is built around recent prospect/customer behaviour as the key driver for predicting their next likely action. Marketers can better understand how an individual prospect or customer is behaving in relation to their brand. By tracking as many interactions, across as wide a number of channels as possible, this can then be compared with the typical behaviour of customers who have completed certain journeys. And this is applicable to many different market sectors.

  3. What were the actions that came out of the predictive model to reduce churn. How were they implemented?

    25 minutes is a very short amount of time to pack in a lot of things. One that I often leave off the list is a detailed description of the treatments employed at each of the stages. But there is a very specific reason for this… the platform RedEye has developed provides the data to the marketer, and it is up to the marketer to then leverage this information. They know their brand and customers better than anyone else. A review of the treatments used by Travis Perkins would be a completely different presentation. Every brand will develop specific treatments and the insight of what Travis Perkins did is therefore of less relevance when we’re looking at how the system was plugged together to provide the outcome. I often say ‘if you knew a specific customer was likely to never buy from you again – what would you want to say to them?’. Every marketer would have a specific answer to this, I am sure!

  4. How did you link website behaviours to an individual? Was it logged in users only?

    At FoM I briefly shot off an answer, saying that we utilise a tag management solution, which was a bit blasé. The RedEye solution has always been built around a personalisation capability centred on the value of an individual’s browsing behaviour, which is also at the core of our approach to Predictive Analytics as described above. We then link this to channel engagement information, transactional data and any other type of data a client has that carries a personal identifier of any kind (a minimal sketch of this kind of identity stitching appears after this Q&A). It is this data that is at the core of the CDP function and therefore the bedrock of Predictive Analytics. With regards to the issue of ‘logged in’: no, the customer or prospect does not need to be logged in, they just have to have given their consent.

  5. Did any of your clients face major hurdles in pulling together all the data from siloed and legacy data pots? If so, how was this overcome?

    I would say that the vast majority of organisations that RedEye work with have internal hurdles with regards to data silos. Some clients who want to input more data find they are restricted by internal systems, and there is very little that RedEye can do to overcome these bottlenecks. But assuming that the data is available somewhere in an organisation, the CDP is there to help marketers resolve these issues. We try to make this work more effectively in two ways. Firstly, we create easier ways to format data into the system, using simple connectors to input (and export) data. And secondly, we offer support staff to help this happen for clients who are resource strapped.

  6. Which is the best CDP you would recommend for publishers?

    If I remember correctly, this question was asked on the day by Nish! Well Nish, as an executive of RedEye I would say get in touch with us! But being a bit more professional, and having asked my colleagues at the Customer Data Platform Institute, I would recommend BlueConic and Lytics, who I’m informed have good experience working with publishers.
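Below is the identity-stitching sketch referred to in answer 4: browsing events, channel engagement and transactions keyed on a shared personal identifier (for example, a hashed email captured with consent) are merged into a single profile that predictive models can then use. The field names and sources are hypothetical.

```python
# Bare-bones identity stitching: merge events from several sources onto one profile.
from collections import defaultdict

web_events = [{"id": "u42", "page": "/product/17", "ts": "2023-05-01T10:02"}]
email_events = [{"id": "u42", "campaign": "spring_sale", "action": "click"}]
transactions = [{"id": "u42", "order_value": 59.90, "ts": "2023-05-02T18:40"}]

profiles = defaultdict(lambda: {"web": [], "email": [], "orders": []})
for e in web_events:
    profiles[e["id"]]["web"].append(e)
for e in email_events:
    profiles[e["id"]]["email"].append(e)
for t in transactions:
    profiles[t["id"]]["orders"].append(t)

# The unified profile is what feeds downstream predictive models.
print(dict(profiles["u42"]))
```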

If anyone else has any other questions, I would be delighted to do my best to answer them; please get in contact with me here.

By Matthew Kelleher

Sourced from Digital Doughnut

By Kevin Blackwell 

When the concept of buyer intent was first introduced, the mathematics behind it were fairly basic. Practitioners would choose an available stream of web hits, then determine which accounts' traffic indicated a propensity for interest in a category of product or service. Occasionally, a division would normalize for the account's size, or a multiplier would amplify a particularly telling signal. But while the process was strong in data, it was weak in Data Science. It took time before true Data Science could be leveraged to account for a multi-stage buying journey and the massive quantity of buying intent signals available in today's advanced buyer intent models.
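
As a rough illustration of that early arithmetic, here is a hedged sketch; the weights, fields and figures are invented, not any vendor's actual model. The score is little more than weighted hit counts normalized by account size.

```python
# Hedged sketch of early buyer-intent arithmetic; all values are invented.
from dataclasses import dataclass

# Pages judged more telling of purchase intent get a larger multiplier.
SIGNAL_WEIGHTS = {"pricing_page": 3.0, "product_page": 1.5, "blog": 1.0}

@dataclass
class Account:
    name: str
    employees: int   # used to normalize raw traffic by account size
    hits: dict       # page category -> number of web hits this week

def intent_score(account: Account) -> float:
    """Weighted hits per 100 employees: a bigger account needs more traffic
    to show the same level of intent as a small one."""
    weighted = sum(SIGNAL_WEIGHTS.get(page, 1.0) * count
                   for page, count in account.hits.items())
    return 100 * weighted / account.employees

acme = Account("Acme", employees=5000,
               hits={"pricing_page": 12, "product_page": 30, "blog": 8})
print(round(intent_score(acme), 2))   # 1.78
```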

Predictive Analytics, often misunderstood as an alternative to buyer intent, commonly had the exact opposite problem. While affording powerful logistic regression models based on any data they could see, predictive analytics models suffered from near data blindness, as marketing departments struggled to feed them anything more than CRM records, website logs, and basic firmographic data. By definition, they could only predict what a prospect would do after entering the sales funnel, since little information prior to that point was available. Strong in Data Science; weak in data.

Today's buyer intent analytics have come a long way. Stronger than ever in data, Best-in-Class solutions leverage sophisticated Data Science models that use differential analysis to model the swings in buyer intent signals that naturally occur as a B2B buying committee progresses through meetings, proof-of-concepts, vendor sorting, and other stages of a buying journey. Logistic regressions optimize media-mix models based upon advertising response rates among accounts showing buyer intent. Multifactor predictive models identify fraudulent ad responses, which are then eliminated from buyer intent scores. And big data architectures bring cutting-edge machine learning and Natural Language Processing (NLP) algorithms to bear in identifying buyer intent signals.
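
As a hedged sketch of the differential-analysis idea, the features fed to the model are the week-over-week swings in intent signals rather than their raw levels. The data below is synthetic and the model is a plain logistic regression, not any vendor's production pipeline.

```python
# Synthetic example: model swings in intent signals, not absolute volume.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Weekly intent-signal volumes per account: shape (accounts, weeks).
signals = rng.poisson(lam=20, size=(500, 8)).astype(float)

# Differential features: week-over-week deltas capture the characteristic
# swings as a buying committee moves between stages of its journey.
deltas = np.diff(signals, axis=1)               # shape (500, 7)

# Synthetic label: accounts whose signals ramped up overall are "in market".
in_market = (deltas.sum(axis=1) + rng.normal(0, 5, 500) > 10).astype(int)

model = LogisticRegression().fit(deltas, in_market)
print("In-market probability, first account:",
      round(model.predict_proba(deltas[:1])[0, 1], 2))
```
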
What do you need to do to prepare for the rapidly changing mathematics behind buyer intent?
  • Make sure buyer intent partners have the Data Science capabilities necessary to model the results you are looking for, but can also relate their findings to non-technical audiences.
  • Build and buy buyer intent capabilities using a wide-focus model that includes statistically significant sample sizes. Focusing on just a few web properties won’t provide enough data to sort true signals from noise.
  • Strengthen your team’s statistical understanding so that they can accurately interpret results from vendors and maximize your ROI. You don’t need a data scientist on staff, but a deeper understanding will help.

By Kevin Blackwell 

Sourced from Business 2 Community

By Mark Bowen

Dave Russell, Vice President for Product Strategy at Veeam, outlines five intelligent data management needs CIOs need to know about in 2019.

The world of today has changed drastically due to data. Every process, whether an external client interaction or an internal employee task, leaves a trail of data. Human- and machine-generated data is growing 10 times faster than traditional business data, and machine data alone is growing 50 times faster than traditional business data.

With the way we consume and interact with data changing daily, innovations to enhance business agility and operational efficiency are also plentiful. In this environment, it is vital for enterprises to understand the demand for Intelligent Data Management in order to stay one step ahead and deliver enhanced services to their customers.

I've highlighted five hot trends for 2019 that decision-makers need to know about. Keeping the Europe, Middle East and Africa (EMEA) market in mind, here are my views:

  1. Multi-Cloud usage and exploitation will rise

With companies operating across borders and the reliance on technology growing more prominent than ever, an expansion in multi-cloud usage is almost inevitable. IDC estimates that customers will spend US$554 billion on cloud computing and related services in 2021, more than double the level of 2016.

On-premises data and applications will not become obsolete, but the deployment models for your data will expand, with an increasing mix of on-prem, SaaS, IaaS, managed clouds and private clouds.

Over time, we expect more of the workload to shift off-premises, but this transition will take place over years, and we believe that it is important to be ready to meet this new reality today.

  2. Flash memory supply shortages, and prices, will improve in 2019

According to a report by Gartner in October this year, flash memory supply is expected to revert to a modest shortage in mid-2019, with prices expected to stabilise largely due to the ramping of Chinese memory production.

Greater supply and improved pricing will result in greater use of flash deployment in the operational recovery tier, which typically hosts the most recent 14 days of backup and replica data. We see this greater flash capacity leading to broader usage of instant mounting of backed up machine images (or copy data management).

Systems that offer copy data management capability will be able to deliver value beyond availability, along with better business outcomes. Example use cases for leveraging backup and replica data include DevOps, DevSecOps and DevTest, patch testing, analytics and reporting.

  3. Predictive analytics will become mainstream and ubiquitous

The predictive analytics market is forecast to reach $12.41 billion by 2022, growing at a CAGR of 22.1% from 2017; compounded over those five years, that puts the 2022 market at roughly 272% of its 2017 size.

Predictive analytics based on telemetry data, essentially Machine Learning (ML)-driven guidance and recommendations, is one of the categories most likely to become mainstream and ubiquitous.

Machine Learning predictions are not new, but we will begin to see them utilising signatures and fingerprints containing best-practice configurations and policies, allowing businesses to get more value out of the infrastructure they have deployed and are responsible for.
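
As a hedged, vendor-neutral sketch of that idea (the configuration keys below are invented examples, not Veeam settings), a 'fingerprint' can be as simple as a best-practice template that deployed systems are checked against, with the differences driving recommendations:

```python
# Vendor-neutral sketch: compare a deployed configuration against a
# best-practice fingerprint and surface recommendations. Keys and values
# are invented for illustration.
best_practice = {
    "backup_copies": 3,                 # e.g. a 3-2-1-style policy
    "offsite_copy": True,
    "encryption": True,
    "restore_test_interval_days": 30,
}

deployed = {
    "backup_copies": 2,
    "offsite_copy": True,
    "encryption": False,
    "restore_test_interval_days": 90,
}

def recommendations(deployed: dict, fingerprint: dict) -> list:
    """One human-readable recommendation per deviation from the fingerprint."""
    notes = []
    for key, expected in fingerprint.items():
        actual = deployed.get(key)
        if actual != expected:
            notes.append(f"{key}: found {actual!r}, best practice is {expected!r}")
    return notes

for note in recommendations(deployed, best_practice):
    print(note)
```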

Predictive analytics, or diagnostics, will assist us in ensuring continuous operations, while reducing the administrative burden of keeping systems optimised. This capability becomes vitally important as IT organisations are required to manage an increasingly diverse environment, with more data, and with more stringent service level objectives.

As predictive analytics becomes more mainstream, SLAs (Service Level Agreements) and SLOs (Service Level Objectives) are rising, and businesses' SLEs (Service Level Expectations) are even higher. This means we need more assistance, and more intelligence, in order to deliver on what the business expects from us.

  4. The 'versatalist' (or generalist) role will increasingly become the new operating model for the majority of IT organisations

While the preceding trends were technology-focused, the future of digital is still analogue: it's people. Talent shortages, combined with a shifting mix of on-premises infrastructure, public cloud and SaaS, are leading to more broadly skilled technicians with backgrounds in a wide variety of disciplines, and increasingly a greater business awareness as well.

Standardisation, orchestration and automation are contributing factors that will accelerate this, as more capable systems allow administrators to take a more horizontal view rather than a deep specialisation.

Specialisation will of course remain important but as IT becomes more and more fundamental to business outcomes, it stands to reason that IT talent will likewise need to understand the wider business and add value across many IT domains.

Yet while we see these trends challenging the status quo next year, some things will not change. There are always constants in the world, and we see two major factors that will remain top-of-mind for companies everywhere:

  • Frustration with legacy backup approaches and solutions

The top three vendors in the market will continue to lose market share in 2019. In fact, the largest provider in the market has been losing share for 10 years. Companies are moving away from legacy providers and embracing more agile, dynamic, disruptive vendors, such as Veeam, that offer the capabilities needed to thrive in the data-driven age.

  • The pain points of the Three Cs: cost, complexity and capability

These Three Cs continue to be the reasons why people in data centres are unhappy with solutions from other vendors. Broadly speaking, these are excessive costs, unnecessary complexity and a lack of capability, which shows up in the speed of backup, the speed of restoration or the ability to instantly mount a virtual machine image. These three criteria will continue to dominate the reasons why organisations augment or fully replace their backup solutions.

  5. The arrival of the first 5G networks will create new opportunities for resellers and CSPs to help collect, manage, store and process the higher volumes of data

In early 2019 we will witness the first 5G-enabled handsets hitting the market at CES in the US and MWC in Barcelona. I believe 5G will most quickly be adopted by businesses for Machine-to-Machine communication and Internet of Things (IoT) technology; with 4G, consumer mobile network speeds have already reached a point where they are probably as fast as most of us need.

2019 will be more about the technology becoming fully standardised and tested, and about future-proofing devices so that they can work with 5G when it becomes more widely available and EMEA becomes a truly Gigabit Society.

For resellers and cloud service providers, excitement will centre on new revenue opportunities leveraging 5G, or the infrastructure to support it. Processing higher volumes of data in real time and at greater speed, new hardware and device requirements, and new applications for managing data will all present opportunities and help facilitate conversations around edge computing.


By Mark Bowen

Sourced from INTELLIGENT CIO