
By Rodney Laws

Budgeting is a key concern for every business, regardless of its size or nature. “What of giant companies with remarkable cash reserves?”, you might ask. Well, they only have such resources because they budget so efficiently. Everyone faces the same balancing act: on one hand, trying to invest in the future, and on the other hand, trying to keep expenses down.

This certainly applies to SEO, because it’s something that can be pursued indefinitely. There’s no such thing as a perfect page. Even if there were, a simple regular update to the ranking algorithm would once again leave it short of perfection. You need to know when to stop your efforts and move on — and for that, you need to know what you stand to gain by doing more.

Any page on your website that you want indexed could deserve SEO investment, depending on what you’ve already done. Due to this, you must carry out some analysis to determine where to focus your attention. In this post, we’re going to identify some tips that can help you figure out which pages on your website are most worthy of search optimization. Let’s begin.

Look for straightforward technical issues

When something isn’t working the way you want it to and there are numerous potential causes, you should check the simplest ones first. This isn’t just because they’re far more common. It’s also because they’re relatively easy to resolve. In the SEO world, there are basic technical issues that can completely ruin rankings, so look out for those.

The most obvious example involves discovering that a page you meant to index has been rejecting crawlers. No crawlers means no rankings. Another example is slow loading. Maybe the page that’s doing the worst in the rankings is loading several times more slowly than others on your site, causing it to be penalised. Google prefers speedy websites for obvious reasons.
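A quick way to rule out the crawler issue is to check whether your robots.txt accidentally blocks a page you want indexed. Here is a minimal sketch in Python using the standard library's robotparser; the rules and URLs are made-up examples:

```python
from urllib import robotparser

# A hypothetical robots.txt that accidentally blocks a section you want indexed.
ROBOTS_TXT = """\
User-agent: *
Disallow: /blog/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Googlebot is refused here even though the page was meant to rank.
print(rp.can_fetch("Googlebot", "https://example.com/blog/best-pizza-winnipeg"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/products/top-hats"))         # True
```

If a check like this returns False for a page you meant to rank, you have found one of those easy wins.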

PageSpeed Insights is great for finding slow pages.

Line up these basic issues and make a priority of fixing them as soon as possible. If there’s one page riddled with them, that should be your focus until they’re all resolved. You’ll never get more impactful results in SEO (and more ROI) than when you fix simple technical issues: making progress after that gets a lot trickier.

You may be able to improve performance sufficiently through tweaking, but it’s possible that you’ll need an infrastructure upgrade. If you created your site through a free or low-cost website builder and invested minimally in hosting, for instance, it’s time to scale things up. This doesn’t mean you need to give up the freedom of open-source ecommerce, though. There are managed hosting providers (such as Cloudways) that give users the freedom to choose which systems they use and make whatever modifications suit their goals.

Shoring up your technical foundation and achieving strong reliability doesn’t mean you can stop, though. SEO fully justifies consistent budgeting, and should remain a priority for years to come. It’s just a matter of ordering. Get the easy wins first, and face the tougher challenges later.

Consider what a page may one day become

Next, you should think not only about what a page is for now but also what it might be in the future. High-value pages should adapt to visitor preferences and SEO standards, but you may also want to expand them to do more with the visits they receive.

Take something like a roundup page listing the best pizza places in Winnipeg. If it got enough traffic, you could make the content more detailed to include more places. This would allow you to add more affiliate links and make some more money. It would also make the page feel more credible and authoritative.

You could even start linking out to other pieces of content like the best curry places in Winnipeg or the best pizza ovens for making pizza at home. Searches like that are extremely actionable, and valuable as a result. It could eventually become one of your most important pages, driving a lot of affiliate revenue and earning you myriad links.

This kind of affiliate page can be very lucrative.

If you can look at a page on your site and see that kind of potential, start investing in SEO for it immediately. That way, it will be ranking relatively well when you start broadening the content. Conversely, if you see a page that’s never going to be more complex than it already is, then is there much value in trying to get it ranking better?

Carefully review your page analytics

The problem that a lot of people have with reviewing analytics is that they expect arduous work. They imagine poring through spreadsheets and tables for hours on end. What’s more, they suspect that it might never meaningfully pay off. Why go through such strain if that’s the case?

Thankfully, that isn’t an accurate conception of what this process involves. This is due to the proliferation of convenient analytics tools: even Google Analytics is reasonably intuitive when you get to grips with it. Using such a tool, you can easily follow all the stats that matter. The result? Enhanced data aggregation and a drastic reduction in time spent on manual review. That’s a huge victory.

When you follow your on-page analytics, pay close attention to any metrics that represent general success for you. Look at things like average time on site and number of unique visits. This will help you understand which pages are performing well and which aren’t. As a result, you’ll find it easier to allocate resources effectively.
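Once those metrics are exported, the triage itself can be a few lines of scripting. A sketch with made-up page names and field names (real analytics exports label these differently): flag pages that hold visitors’ attention but attract little traffic, since those are natural candidates for SEO spend.

```python
from statistics import median

# Hypothetical per-page metrics exported from an analytics tool;
# the page names and field names here are invented for illustration.
pages = [
    {"page": "/home",                "unique_visits": 5200, "avg_time_s": 35},
    {"page": "/blog/pizza-winnipeg", "unique_visits": 340,  "avg_time_s": 210},
    {"page": "/products/top-hats",   "unique_visits": 1100, "avg_time_s": 95},
    {"page": "/about",               "unique_visits": 90,   "avg_time_s": 20},
]

time_cutoff = median(p["avg_time_s"] for p in pages)
visit_cutoff = median(p["unique_visits"] for p in pages)

# Pages that engage visitors well but attract few of them are strong
# candidates for SEO investment: the content works, the reach doesn't.
candidates = [
    p["page"] for p in pages
    if p["avg_time_s"] > time_cutoff and p["unique_visits"] < visit_cutoff
]
print(candidates)  # ['/blog/pizza-winnipeg']
```

The exact cutoffs are a judgment call; medians simply keep the sketch free of magic numbers.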

Keep in mind that you’ll need to factor in the broader context when you’re dealing with pages that aren’t destination pages (or at least aren’t solely destination pages). Category pages and homepages are good examples. A homepage should be a good landing page, set up to rank well for relevant keywords, but it must do something with that ranking success: pass it on to more actionable pages (namely product pages).

For each page, root through all the links leading to or away from it, and think carefully about what role it plays in your website overall, as well as what role it could play if improved. Remember that there must always be a purpose beyond promotion (boasting about your brand might feel good, but it won’t earn traffic, nor will it convince people to invest in you).

Use competitors’ websites for comparison

Let’s say that one of your pages — not one of your most important, but not insignificant — is fairly mediocre when it comes to SEO. Does that mean you should invest in it? Well, before you decide, do some searches for relevant terms and find rival pages to analyse. How do they compare to yours? Are they better? Worse? Similar?

Beyond that, take a close look at what those pages offer because that’s extremely important here. Imagine that your website sells hats, and you have a page that sells top hats specifically. If it isn’t ranking very highly but none of the pages outranking it sell anything, that lowers the need to invest in the page.

You should still work on getting the ranking up at some point, but it isn’t particularly urgent. Anyone who wants to order a top hat will probably make their way to your site eventually if yours is the highest-ranking sales page, so you won’t lose any customers to other stores. All you forfeit by not being top is some thought leadership and brand credibility.

If your page were ranking more highly in general but under another store page, that would be a bigger reason to invest in SEO. A searcher who just wanted to go ahead and buy might just order through the first viable store result. It’s all about understanding context. What do you stand to gain or lose by putting time into SEO?

Conclusion

SEO is a vital part of running a business with an online presence (which should be almost every business these days). Unfortunately, it’s complicated and costly to carry out thoroughly. To work more efficiently, you need to pick out the pages that most warrant that level of investment.

Try these tips to give you a strong indication of where to spend. You should be able to come up with a shortlist of pages that you can optimise without breaking the bank. The result? Stronger rankings, lower spend, and less work. Good luck.

Feature Image credit: Pixabay

By Rodney Laws

Editor at Ecommerce Platforms 


Here are the true pros and cons of VPNs

WASHINGTON, D.C. — Most consumer VPN services overpromise what they can deliver and exaggerate their own usefulness, two security researchers said at the ShmooCon hacker conference.

“Lots of people use VPNs because they don’t actually know what they do,” said Yael Grauer, an investigative reporter at Consumer Reports. “People are spending a lot of money and they’re still getting hacked, or they’re spending a lot of money for protections they already have.”

James Troutman, a director of technology at Tilson Broadband, was more blunt in his own presentation later that same day.

“VPNs are internet snake oil,” Troutman said, comparing them to the worthless miracle cures that traveling salesmen used to peddle at the turn of the 20th century.

VPN claims vs. reality

Like real snake oil, Troutman and Grauer explained, VPNs claim to resolve all sorts of security and privacy ills, tossing around impressive-sounding but meaningless terms such as “unbreakable security,” “true privacy” and “military-grade encryption.”

The VPNs may claim in their ads and on their websites that they can protect your PC from hackers, or keep your passwords safe, or make sure that websites can’t track you. For that, they claim, it’s worth paying between $50 and $150 a year for their services.

In 2021, Grauer and a team from the University of Michigan tested 51 consumer VPN service providers. Along with Consumer Reports colleagues, she conducted a more extensive analysis of 16 major VPN brands, including CyberGhost, ExpressVPN, Hotspot Shield, IPVanish, NordVPN, Private Internet Access, ProtonVPN and Surfshark. (Grauer and Troutman both warned against using lesser-known VPNs, especially free VPN services that pop up in mobile app stores.)

Grauer found that of the 16 well-known VPN services she analyzed, 12 made exaggerated claims about how much protection they really could provide.

One well-known VPN said “your data will never be compromised” if you used it, Grauer documented in her white paper. Another VPN said it would “protect [you] from hackers and online tracking.” A third promised “absolute privacy on all devices,” and another guaranteed “anonymous surfing.”

Better privacy, but not better security

The fact is, Grauer and Troutman said, that VPNs can’t protect you from hackers or malware. While VPNs do increase your online privacy, they’re not doing much to make your computers and systems more secure.

VPNs also can’t stop your personal information from being disclosed in data breaches. They can’t stop websites from tracking you — there are many other ways to track you online besides just following your Internet Protocol (IP) address.

VPNs can’t prevent you from landing on phishing sites or from being tricked into giving your passwords to a criminal. They can’t “guarantee” your privacy, Troutman said.

“When people ask me if they should use a VPN,” Grauer said, “I tell them no, they should use a password manager instead.”

However, four of the 16 VPNs that Grauer and her team closely analyzed got high marks for honesty.

IVPN, Mozilla VPN, Mullvad and TunnelBear were clear and accurate about what VPNs could and couldn’t do. They also gave potential customers suggestions about other security and privacy best practices they could take, such as using two-factor authentication (2FA) and blocking browser trackers.

What VPNs can do

Both Grauer and Troutman said that there are legitimate reasons to use VPNs, and that for the most part, the better-known VPNs do a good job of making your network connections more private.

VPNs protect against “man-in-the-middle” attacks that you might encounter using open Wi-Fi networks in a coffee shop or hotel, even though the risks of that are small now that most websites use encrypted connections.

VPNs make it more difficult for internet service providers (ISPs) to see which websites you’re visiting, although Troutman pointed out that your VPN will be seeing that information instead.

VPNs can help people in repressive countries evade mass censorship, such as Russia’s recent blocking of Facebook and Instagram. And, of course, VPNs often (but not always) can let you access overseas Netflix and other services that are geographically restricted.

But, Troutman said, VPNs in practice can’t do much to protect specific individuals from state surveillance. National intelligence agencies have means at their disposal that can easily evade the protections a VPN would provide a targeted person.

“Mossad is gonna Mossad,” Troutman said.

Grauer and Troutman added that while VPNs do a good job of masking the “old” form of IP addresses, known as IPv4, they don’t always work well with IP addresses using the newer IPv6 standard.

That’s because many devices’ IPv6 addresses are tied to the devices’ unique network-hardware information, part of a well-documented network privacy flaw that extends beyond VPN use.
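The hardware tie-in comes from the older “modified EUI-64” scheme, in which a device builds the lower half of its IPv6 address directly from its network card’s MAC address (modern operating systems mitigate this with randomized privacy addresses). A Python sketch of the derivation, using an arbitrary example MAC:

```python
# Sketch of the "modified EUI-64" scheme (illustrative; example MAC address):
# flip the universal/local bit of the first octet, then insert ff:fe
# between the two halves of the MAC to form the 64-bit interface ID.
def eui64_interface_id(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                             # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # wedge ff:fe in the middle
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

# The original MAC is recoverable from an address built this way,
# which is what makes the device identifiable across networks.
print(eui64_interface_id("52:54:00:12:34:56"))  # 5054:ff:fe12:3456
```

Because the MAC survives inside the address, switching VPNs does nothing to hide it from anyone who sees the IPv6 address itself.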

What’s behind the VPN push

Yet the consumer VPN industry has grown to take in an estimated $30 billion per year, partly through repeating unverifiable claims and exploiting consumers’ fear of surveillance technology, Troutman said.

One big impetus for VPN adoption was Edward Snowden’s 2013 leaks of NSA documents that showed how extensive American data collection could be. Another was the U.S. Senate’s 2017 vote to block an FCC rule that would have prevented ISPs from reselling data about consumer behavior. And finally, many security experts and security-focused websites, including Tom’s Guide, did and do still recommend using VPNs.

VPN providers launched advertising campaigns around these issues, claiming that only paying for their services could preserve your online privacy. Advertising is still a big part of the industry.

“How many of you listen to podcasts?” Troutman asked the ShmooCon crowd. “It seems that every podcast is sponsored by a VPN.”

You can’t always count on review websites to provide honest information about VPNs. Troutman and Grauer pointed out that many of the VPN “review” sites you can find through a Google search are actually owned by VPN providers.

Even if a site recommends more than one VPN, recent VPN industry consolidation means that many of the largest brands are owned by the same few companies.

You bought the biggest threat to your privacy

Yet, as Troutman pointed out, the biggest threat to your privacy probably isn’t your ISP, or the websites you visit on your PC, or even (for most people) the NSA, CIA, Russians, Iranians or Chinese.

Instead, the biggest threat to your privacy is the smartphone you paid a lot of money for and carry around in your pocket.

It’s a state-of-the-art tracking device that constantly transmits thousands of data points about your online activities, your physical location, your travels, your health and your interests to the phone’s manufacturer, to your wireless carrier, and to the makers of most of the apps you have installed — “pervasive and sophisticated online user activity surveillance,” as Troutman put it.

Using a VPN on your smartphone will temporarily confuse some of these tracking methods, Troutman said, but not for long. There are many other methods of collecting your behaviour and information that don’t depend on an IP address.

What VPNs really are good for

So is there any downside to using a VPN that stretches the truth? Not much, other than that you may be paying for something you don’t need.

Grauer and her team found that most of the 16 top providers she looked at used strong encryption, had no known security flaws, didn’t collect much user information, didn’t share information with third parties, and had clear and easy-to-find terms of service.

They also found that if a VPN provider made exaggerated claims in bold letters about the benefits of using its services, those claims were often dialled back in the fine print.

Many of the top providers, however, could be more transparent about whether they log user activity, Grauer said. Almost all VPNs claim they don’t log what their users do, but Grauer’s team found that the VPN client software used by several top providers kept logs on users’ computers.

Many VPNs could also be clearer about how long they keep the user data they do collect, and many don’t let users see what has been collected about them.

Who should use a VPN?

So should you use a VPN? It depends what you want to use it for, said Troutman. Many ISPs keep logs of customer behaviour for years, and if that bothers you and you can find a VPN that you trust more than your ISP, go ahead and use it.

Frequent travellers who need secure connections while abroad will also need VPNs, although streaming content across national borders isn’t as reliable as it was a few years ago. And if you’re doing anything illegal in the country you happen to live in, a VPN should just be the first step in masking your online activities.

But for the average home user who isn’t concerned about what their ISP knows and doesn’t need to access streaming services from overseas, paying for a VPN might not be worth it.

Feature Image Credit: Wright Studio/Shutterstock


Paul Wagenseil is a senior editor at Tom’s Guide focused on security and privacy. He has also been a dishwasher, fry cook, long-haul driver, code monkey and video editor. He’s been rooting around in the information-security space for more than 15 years at FoxNews.com, SecurityNewsDaily, TechNewsDaily and Tom’s Guide, has presented talks at the ShmooCon, DerbyCon and BSides Las Vegas hacker conferences, shown up in random TV news spots and even moderated a panel discussion at the CEDIA home-technology conference. You can follow his rants on Twitter at @snd_wagenseil.

Sourced from Tom’s Guide


Researchers design a user-friendly interface that helps nonexperts make forecasts using data collected over time.

Whether someone is trying to predict tomorrow’s weather, forecast future stock prices, identify missed opportunities for sales in retail, or estimate a patient’s risk of developing a disease, they will likely need to interpret time-series data, which are a collection of observations recorded over time.

Making predictions using time-series data typically requires several data-processing steps and the use of complex machine-learning algorithms, which have such a steep learning curve they aren’t readily accessible to nonexperts.

To make these powerful tools more user-friendly, MIT researchers developed a system that directly integrates prediction functionality on top of an existing time-series database. Their simplified interface, which they call tspDB (time series predict database), does all the complex modelling behind the scenes so a nonexpert can easily generate a prediction in only a few seconds.

The new system is more accurate and more efficient than state-of-the-art deep learning methods when performing two tasks: predicting future values and filling in missing data points.

One reason tspDB is so successful is that it incorporates a novel time-series-prediction algorithm, explains electrical engineering and computer science (EECS) graduate student Abdullah Alomar, an author of a recent research paper in which he and his co-authors describe the algorithm. This algorithm is especially effective at making predictions on multivariate time-series data, which are data that have more than one time-dependent variable. In a weather database, for instance, temperature, dew point, and cloud cover each depend on their past values.

The algorithm also estimates the volatility of a multivariate time series to provide the user with a confidence level for its predictions.

“Even as the time-series data becomes more and more complex, this algorithm can effectively capture any time-series structure out there. It feels like we have found the right lens to look at the model complexity of time-series data,” says senior author Devavrat Shah, the Andrew and Erna Viterbi Professor in EECS and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems.

Joining Alomar and Shah on the paper is lead author Anish Agrawal, a former EECS graduate student who is currently a postdoc at the Simons Institute at the University of California at Berkeley. The research will be presented at the ACM SIGMETRICS conference.

Adapting a new algorithm

Shah and his collaborators have been working on the problem of interpreting time-series data for years, adapting different algorithms and integrating them into tspDB as they built the interface.

About four years ago, they learned about a particularly powerful classical algorithm, called singular spectrum analysis (SSA), that imputes and forecasts single time series. Imputation is the process of replacing missing values or correcting past values. While this algorithm required manual parameter selection, the researchers suspected it could enable their interface to make effective predictions using time-series data. In earlier work, they removed the need for this manual intervention.

The single-time-series algorithm transforms the series into a matrix and applies matrix estimation procedures. The key intellectual challenge was how to adapt it to handle multiple time series. After a few years of struggle, they realized the answer was something very simple: “stack” the matrices for each individual time series, treat the result as one big matrix, and then apply the single-time-series algorithm to it.

This utilizes information across multiple time series naturally — both across the time series and across time, which they describe in their new paper.
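The stacking recipe can be illustrated with a toy example, assuming NumPy. This is only a sketch of the general “stack, then apply matrix estimation” idea, not the authors’ exact mSSA procedure: two related series are reshaped into matrices, stacked, and hidden entries are recovered with an iterative truncated-SVD fit.

```python
import numpy as np

# Toy illustration (not the paper's exact mSSA procedure): reshape each
# series into a matrix, stack the matrices into one big matrix, and
# recover hidden entries with an iterative low-rank (truncated SVD) fit.
rng = np.random.default_rng(0)
t = np.arange(64)
series_a = np.sin(2 * np.pi * t / 16)        # two series with shared structure
series_b = 0.5 * np.sin(2 * np.pi * t / 16)

stacked = np.vstack([series_a.reshape(8, 8), series_b.reshape(8, 8)])

mask = rng.random(stacked.shape) > 0.3       # keep ~70% of entries observed
filled = np.where(mask, stacked, 0.0)        # start with zeros in the gaps

for _ in range(50):                          # iterative hard-imputation
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = (U[:, :1] * s[:1]) @ Vt[:1]   # top component: toy data is rank one
    filled = np.where(mask, stacked, low_rank)  # refresh only the hidden entries

err = np.abs(filled - stacked)[~mask].mean()  # error on the hidden entries
print(f"mean imputation error on hidden entries: {err:.4f}")
```

Because the second series is a scaled copy of the first, the stacked matrix is low-rank and the hidden entries are recovered almost exactly; that shared structure across series is precisely what stacking exploits.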

This recent publication also discusses interesting alternatives, where instead of transforming the multivariate time series into a big matrix, it is viewed as a three-dimensional tensor. A tensor is a multi-dimensional array, or grid, of numbers. This established a promising connection between the classical field of time series analysis and the growing field of tensor estimation, Alomar says.

“The variant of mSSA that we introduced actually captures all of that beautifully. So, not only does it provide the most likely estimation, but a time-varying confidence interval, as well,” Shah says.

The simpler, the better

They tested the adapted mSSA against other state-of-the-art algorithms, including deep-learning methods, on real-world time-series datasets with inputs drawn from the electricity grid, traffic patterns, and financial markets.

Their algorithm outperformed all the others on imputation and it outperformed all but one of the other algorithms when it came to forecasting future values. The researchers also demonstrated that their tweaked version of mSSA can be applied to any kind of time-series data.

“One reason I think this works so well is that the model captures a lot of time series dynamics, but at the end of the day, it is still a simple model. When you are working with something simple like this, instead of a neural network that can easily overfit the data, you can actually perform better,” Alomar says.

The impressive performance of mSSA is what makes tspDB so effective, Shah explains. Now, their goal is to make this algorithm accessible to everyone.

Once a user installs tspDB on top of an existing database, they can run a prediction query with just a few keystrokes in about 0.9 milliseconds, as compared to 0.5 milliseconds for a standard search query. The confidence intervals are also designed to help nonexperts make more informed decisions by incorporating the degree of uncertainty of the predictions into their decision making.

For instance, the system could enable a nonexpert to predict future stock prices with high accuracy in just a few minutes, even if the time-series dataset contains missing values.

Now that the researchers have shown why mSSA works so well, they are targeting new algorithms that can be incorporated into tspDB. One of these algorithms utilizes the same model to automatically enable change point detection, so if the user believes their time series will change its behaviour at some point, the system will automatically detect that change and incorporate that into its predictions.

They also want to continue gathering feedback from current tspDB users to see how they can improve the system’s functionality and user-friendliness, Shah says.

“Our interest at the highest level is to make tspDB a success in the form of a broadly utilizable, open-source system. Time-series data are very important, and this is a beautiful concept of actually building prediction functionalities directly into the database. It has never been done before, and so we want to make sure the world uses it,” he says.

“This work is very interesting for a number of reasons. It provides a practical variant of mSSA which requires no hand tuning, they provide the first known analysis of mSSA, and the authors demonstrate the real-world value of their algorithm by being competitive with or out-performing several known algorithms for imputations and predictions in (multivariate) time series for several real-world data sets,” says Vishal Misra, a professor of computer science at Columbia University who was not involved with this research. “At the heart of it all is the beautiful modelling work where they cleverly exploit correlations across time (within a time series) and space (across time series) to create a low-rank spatiotemporal factor representation of a multivariate time series. Importantly this model connects the field of time series analysis to that of the rapidly evolving topic of tensor completion, and I expect a lot of follow-on research spurred by this paper.”

Feature Image Credit: Courtesy of the researchers and edited by MIT News


Sourced from MIT News

Given its privacy-oriented, opt-in nature, email is entering the spotlight as a tool for publishers to directly communicate with their readers and own more of the traffic that goes to their sites. As with all channels in the marketing mix, however, customers have expectations when it comes to personalization, context and relevance.

Given the increasing availability of usable first-party data, combined with advancements in machine learning and natural language processing, publishers are empowered to deliver personalization beyond a simple salutation to better serve their readers, drive loyalty and optimize the email newsletter as prime real estate for monetization.

To highlight the ways in which publishers are currently using personalization in email and how they plan to evolve their strategies, Digiday and Jeeng surveyed nearly 90 publisher representatives. This report delves into the results, and in conjunction with expert insights, provides an overview of the changing role of email personalization for publishers and how companies are adapting accordingly.

Download this new report to learn:

  • How personalization is evolving for publishers
  • How publishers are approaching personalization challenges
  • What outcomes are achieved with an optimized personalization strategy
  • Where personalization is heading in 2022 and beyond

 

By Jeeng

Sourced from DIGIDAY

Sourced from Entrepreneur

These Upskillist Courses can help you start a side hustle.

These days, if it feels like everybody has a side hustle, it’s because a whole lot of people have a side hustle. More than a third of Americans have a side hustle today, and two-thirds of them started it in the last three years.

You haven’t been left behind if you haven’t picked up a side hustle yet, but consider it a good growth opportunity for 2022. During our Best of Digital Sale, you can save an extra 50 percent off courses from Upskillist to help you get started. Check out some of the courses on sale.

1. Learn the Basics of Cryptocurrency

Crypto may not be a side hustle exactly, but it certainly could be a means of passive income. In this course, you’ll learn what cryptocurrency is, understand more about it, and learn some of the most proven trading strategies to earn a profit.

Get Learn the Basics of Cryptocurrency for $10 (reg. $1,200) with promo code LEARNNOW.

2. Learn the Basics of Digital Marketing & SEO

In the digital age, every business needs a strong focus on digital marketing and not every business knows how to do it. Carve out a role as a consultant or digital marketing expert with this course that covers a variety of channels. From SEO and content marketing to paid ads, Google Analytics, and more, you’ll get a solid foundation for a digital marketing education.

Get Learn the Basics of Digital Marketing & SEO for $10 (reg. $1,200) with promo code LEARNNOW.

3. Learn the Basics of Computer Science

Who cares if you missed out on computer science courses in college? This course will give you a solid foundation in computer science. You’ll go through computer architecture, understand the binary system, learn how programming languages work, and much more.

Get Learn the Basics of Computer Science for $10 (reg. $1,200) with promo code LEARNNOW.

4. Learn the Basics of Coding & Technology

Want to learn to code? This 15-hour course will get you started. You’ll explore mobile tech, operating systems, software development, big data, and many more leading tech topics. As you work with different technologies, you’ll learn coding essentials to help you work with them more efficiently.

Get Learn the Basics of Coding & Technology for $10 (reg. $1,200) with promo code LEARNNOW.

5. Learn the Basics of Graphic Design

Great design can make a huge difference for businesses. But it’s expensive, which is why many businesses outsource. Be the designer they need with this 15-hour course covering some of today’s most important graphic design tools, design theory, and more.

Get Learn the Basics of Graphic Design for $10 (reg. $1,200) with promo code LEARNNOW.

6. Learn the Basics of Photoshop

For a more intensive focus on graphic design, take a deep dive into Photoshop. Photoshop is more than just a photo editor. You’ll learn just how much more this comprehensive design software can do in this extensive course.

Get Learn the Basics of Photoshop for $10 (reg. $1,200) with promo code LEARNNOW.

Prices subject to change.

Feature Image Credit: StackCommerce


Sourced from Association of Advertisers in Ireland

We are delighted to welcome Larry Ryan, from Behaviour & Attitudes, to take part in our next Toolkit session on Thursday April 28th at 10am.

Date: 28th of April
Time: 10am
Location: Online
Registration: Here. 

Behaviour & Attitudes undertook a wide ranging consumer, customer and promoter study, together with a global review of responses and initiatives across the music sector to help the National Concert Hall develop a safe route back to live performance in 2021. The research undertaken was used to help develop the National plan for the return to live performance across the music industry and was recently awarded a special 2021 Covid-19 Research Excellence Award by the Marketing Society of Ireland.

Larry Ryan of B&A will take us through a review of the challenges and approaches involved in the study. He will demonstrate how it enabled the Concert Hall to emerge from the pandemic ahead of its private sector competitors and chart a trailblazing return to profitable live performance.

Larry has been a researcher of trends, themes and sectors for more than 35 years with stints at Lansdowne, MRBI and Guinness before joining B&A, where he is one of the joint owners, in 1997. He has a particular interest in studies related to popular culture/media, education, housing/development and healthcare.


By Anete Lusina

Two Stanford researchers have found widespread use of fake LinkedIn accounts created with artificial intelligence (AI)-generated profile photos. These profiles target real users in an attempt to increase interest in certain companies before passing the successful leads to a real salesperson.

Misinformation online takes different forms, from false or skewed facts presented as the truth to machine-generated photos and videos that can be used for a variety of unethical and damaging purposes.

AI Photos Used as Fake Profile Photos

Two researchers, Renée DiResta and Josh Goldstein from the Stanford Internet Observatory, discovered that LinkedIn, like Facebook and Twitter, is not immune to this digital-age problem. In LinkedIn’s case, they found that bots using AI-generated faces (as many as 1,000 fake profiles) are being leveraged to create false buzz around some companies, reports The Register.

The process is simple: a bot with an AI-generated profile photo contacts an unsuspecting LinkedIn user and, if the target shows interest, they get passed on to a real salesperson to continue the conversation.

The two researchers made the discovery after DiResta received a message from a profile belonging to a “Keenan Ramsey.” At first, it looked like a normal sales pitch from a software company, but it soon became clear that Ramsey was a fictitious person: the fake profile headshot contained multiple red flags, such as the unusually central alignment of the eyes, a single earring, and hair that blurred into the background.

After the AI-generated profile photo jumped out as a fake, DiResta, who has also studied Russian disinformation campaigns and anti-vaccine conspiracies, began looking into the matter with Goldstein, and the pair found over 1,000 profiles using AI-generated photos.

Using AI to Cut Down on Hiring Costs

Companies use profiles like these to cast a wide net of potential leads without having to use real sales staff and to avoid hitting LinkedIn message limits. The researchers found that more than 70 businesses were listed as employers of fake profiles; some companies told NPR that they had hired outside marketers to help with sales but hadn’t authorized the use of AI-generated photos, and were surprised by the findings.

“We are constantly updating our technical defences to better identify fake profiles and remove them from our community, as we have in this case,” says LinkedIn spokesperson Spilman. “At the end of the day, it’s all about making sure our members can connect with real people, and we’re focused on ensuring they have a safe environment to do just that.”

Difficult For Naked Eye to Detect Truth

Although some businesses may employ AI-assisted marketing tactics because they are cheaper than employing real people, it’s difficult for users on the other side of the screen to distinguish between a fake and a real profile photo: a recent study published in PNAS found that people have only a 50% chance of guessing correctly. The research also found that some people rate machine-generated faces as more trustworthy, which Hany Farid, co-author of the study, suspects is because AI often produces average facial features.

To make it easier for people to tell real and fake profiles apart, V7 Labs created AI software that works as a Google Chrome extension and is capable of detecting profiles belonging to a bot, with a claimed 99.28% accuracy.

The V7 Labs “Fake Profile Detector” extension aims to help authorities and regular Internet users spot and report profiles that spread fake news or otherwise create misleading content.

Feature Image Credit: All photos are AI-generated via This Person Does Not Exist.

By Anete Lusina

Sourced from PetaPixel

By Beanstalk Web Solutions

Deciding on the right mix of marketing strategies is a huge undertaking for any business. Some might feel compelled to dive into the deep end and get into the weeds with all the tools at their disposal, while others might prefer to survey their options and build a thorough plan before accomplishing much of anything.

These two different approaches can be summed up in the terms tactical and strategic marketing. Whether an organization is looking to start from scratch or simply trying to beef up existing marketing efforts, knowing how to best leverage both is a huge advantage.

What is tactical marketing?

When it comes to digital marketing, the tactics a business uses can range from email blasts to blogging and from organic social posts to Google Ads. Tactical marketing means focusing on those individual elements. Some common digital marketing tactics include:

  • Blogging: Keeping a blog up to date can be useful for search engine optimization (SEO) by targeting specific keywords. If SEO efforts are successful, a business will rank higher when its target audience searches for its products or services.
  • Email marketing: Regular email newsletters can engage a customer base over time. Remarketing is another effective form of email marketing where those who have interacted with an organization’s site before (for example, added something to their cart but not checked out) get reminders of their previous interest right in their inbox.
  • Ads: There is a wide variety of digital ad opportunities. Paid social and paid search are some of the most common. Google Ads is its own beast entirely, and each major social site has a way to run ads where your potential customers are already spending their time.
  • Organic posts: We just touched on social ads, but creating an organic social following through the right message can also be an effective tactic.

What is strategic marketing?

Strategic marketing looks at the big picture. While tactical marketing can sometimes miss the forest for the trees, strategic marketing zooms out over the whole landscape. Setting goals, coordinating with other departments and creating an overarching message are all part of this type of marketing.

Research is another vital element to building a successful marketing strategy. Knowing what’s possible, what tools are available and what others in the industry are up to are just some of the important things to know before crafting a digital marketing strategy.

This research element is one of the issues some people run into when they attempt strategic marketing. There might be a steep learning curve. So, instead of struggling alone, some opt to bring on digital marketing experts in some fashion. Having a team well-versed in the field can be a huge boon when it comes to choosing the right goals, techniques and messages.

Which is better?

The bottom line is both kinds of marketing go hand-in-hand. Ignoring one or the other is a surefire way to hamstring a marketing effort before it ever gets up and running.

An overall strategy ensures cohesion between the individual tactics. Messages between different ads, posts and emails should all be in harmony with each other, and the only way to do that is to look at the broader strategy. It is also important to have a goal to work toward so that a business can measure its digital marketing success.

Without putting the tactics that build up that strategy into practice, though, no organization is going to reach its goals. Lots of businesses will decide they want to try a tactic like Google Ads or email marketing and just jump straight into building those campaigns. It might seem like diving into the thick of things is a quick way to get results, but ignoring that overall strategy is just a quick way to waste time and resources.

For more information about the success of strategic and tactical marketing, check out Beanstalk Web Solutions’ case study.

By Beanstalk Web Solutions

Beanstalk Web Solutions is a St. Louis-based web design and digital marketing agency. Contact Beanstalk to learn more about the right mix of strategic and tactical digital marketing opportunities for your business.

Sourced from St Louis Business Journal

By Paul Kirvan

AI provides key enhancements to existing emergency notification systems that can reduce the amount of time a business needs to effectively prepare for and respond to a crisis.

Crisis communications have come a long way from call trees and text chains. Today’s emergency notification systems and cloud-based notification services are far more effective than relying on employees to call each other.

However, these developments have not made crisis communications foolproof. For example, if emergency messages never reach their intended recipients, the sender might not get a notification of the message delivery failure. If a reply message is not generated, an organization’s emergency teams could be facing an incident that escalates into a full-blown crisis due to the lack of clear communication.

Artificial intelligence (AI) and machine learning (ML) are highly proficient in capturing a wide variety of data inputs and then making predictions and emergency recommendations. Organizations can use these technologies for identification and classification of emergency tasks, as well as to provide communications and intelligence at the right time and to the right people. AI has a role to play in the future of crisis communications, and it’s only just getting started.

What does AI bring to the table?

AI and ML can provide additional value to emergency notification system (ENS) technology. Today, ENSes are generally programmed to disseminate a variety of message types, such as email, text and SMS, to preset lists of individuals. While some more traditional systems can request replies from message recipients, AI-enhanced systems can do that and more.

AI crisis communications systems can use multiple channels of information to provide value to emergency message delivery. These channels can include weather forecast data or drone-generated video, among others. An AI-enabled ENS, for example, can take weather data generated by the National Oceanic and Atmospheric Administration and translate it into forecast data that can then be formatted into a series of alert messages helping people to prepare for an impending hurricane or other severe weather.
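As a sketch of that translation step, the snippet below formats a single weather alert into channel-specific messages. The field names are modeled on the shape of the public NWS alerts API (api.weather.gov), but the payload here is a hand-written sample and the formatting rules are assumptions for illustration, not any vendor's actual pipeline.

```python
# A minimal sketch of translating weather data into ENS alert messages.
# Field names loosely follow the public NWS alerts API; the payload is a
# hand-written sample, not live data.

def format_alert_messages(alert: dict, max_sms_len: int = 160) -> dict:
    """Turn one weather alert into channel-specific messages."""
    props = alert["properties"]
    headline = f"{props['event']} for {props['areaDesc']}"
    email_body = (
        f"{headline}\n"
        f"Severity: {props['severity']}\n"
        f"Effective: {props['effective']}\n\n"
        f"{props['description']}"
    )
    # SMS channels are length-limited, so truncate to one message.
    sms_body = f"{headline}. {props['instruction']}"[:max_sms_len]
    return {"email": email_body, "sms": sms_body}

sample = {
    "properties": {
        "event": "Hurricane Warning",
        "areaDesc": "Coastal Miami-Dade County",
        "severity": "Extreme",
        "effective": "2024-09-01T06:00:00-04:00",
        "description": "Hurricane conditions expected within 36 hours.",
        "instruction": "Complete preparations and follow evacuation orders.",
    }
}

messages = format_alert_messages(sample)
print(messages["sms"])
```

In a real AI-enabled ENS, the classification and prioritization of incoming feeds would be handled by the model; the fan-out to email, text and SMS channels would follow the same pattern as this formatting step.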

Another example of AI-enhanced crisis communications is using the system to ask specific questions about a situation, such as the likelihood of tornadoes or other natural disasters forming. The system can examine multiple resources to provide message recommendations and other analyses.

Inclusion of AI and ML technology is increasingly found in ENS offerings from traditional vendors as well as messaging system vendors. It is up to the user to determine which AI-enabled capabilities are best suited to the organization and how they will add value to corporate ENS requirements. Non-AI systems will still provide rapid dissemination of emergency messages, and many can support reply messages, so at that point the added edge (and expense) of AI becomes a business decision.

AI-enhanced vs. traditional ENS

Earlier ENS technology was largely on site, with a server designated to provide ENS functions connected either to landlines from the local telephone company or to the internet to deliver messages. Figure 1 depicts how a traditional premises-based ENS uses the internet to deliver messages.

Diagram of a non-hosted ENS
Figure 1. A non-hosted emergency notification system.

By contrast, today’s systems are often hosted by a specialized ENS vendor, with the technology in the cloud. All resources are located with the vendor, and access is as simple as using a laptop or smartphone. Figure 2 depicts a hosted ENS configuration. Users are completely dependent on the ENS vendor to deliver emergency messages when the system launches.

Diagram of a hosted ENS
Figure 2. A hosted emergency notification system.

When AI and ML are in the mix, the configuration is largely unchanged; the difference lies in the capabilities the ENS gains once AI and ML are implemented. Figure 3 shows a possible configuration of an AI-enabled, cloud-based ENS.

Diagram of an AI- and ML-enabled ENS
Figure 3. An AI- and ML-enabled emergency notification system.

Traditional ENS message delivery and reply features are enabled, and AI capabilities add value by using a variety of other resources.

Market options and pricing

Prices for standalone crisis communications systems can range from under $5,000 to well over $200,000.

Managed ENS offerings usually require payment of a monthly fee for the service. This is typically based on the number of contacts in the database, the features being used and the network transport services that deliver the messages. There can also be activation fees when the system is used in a disaster, and some systems will have setup fees. Monthly fees can range from under $500 to over $25,000, depending on the system configuration.
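As a rough illustration of that pricing structure, here is a toy cost model in Python. The base fee, per-contact rate and activation fee are invented numbers purely for the example, not taken from any vendor's price list.

```python
# A toy cost model for a hosted ENS subscription. All rates below are
# made-up illustrative figures; real vendor pricing varies widely.

def monthly_ens_cost(contacts: int, per_contact_rate: float = 0.25,
                     base_fee: float = 400.0, activations: int = 0,
                     activation_fee: float = 150.0) -> float:
    """Estimate one month's bill: base fee + per-contact charge + activations."""
    return base_fee + contacts * per_contact_rate + activations * activation_fee

# 5,000 contacts and one live activation during a storm:
print(monthly_ens_cost(5000, activations=1))  # 400 + 1250 + 150 = 1800.0
```

Plugging an organization's own contact count and expected activation frequency into a model like this makes it easier to compare a managed offering against the upfront cost of a standalone system.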

Hosted ENS tools require no physical space for equipment, there are minimal or no upfront installation fees, and customers can discontinue service with minimal technical effect on the organization. The inclusion of AI features will vary by vendor, and organizations should carefully research all options before making a buying decision.

Organizations that already use emergency notification systems will need to evaluate the added value versus the cost to upgrade their existing tool to an AI-enabled one. For example, an existing system might not be upgradeable to one with AI, and a replacement would be needed.

There are several crisis communications vendors that offer AI- or ML-enabled platforms and products. Vendor options in this market include the following:

  • Omnilert started as the developer of a campus emergency communications system. Current hosted products use AI to detect, analyze and visualize emergency situations through intelligent data capture and analysis, and the products offer an easy-to-use interface. Omnilert offers a free trial; check with the vendor for more pricing information.
  • Quiq provides an AI-enabled messaging platform that businesses can adapt to different situations, such as customer order placement and customer service inquiries. Although ENS is not specifically listed as an application, the Quiq platform is easily adaptable to crisis communications applications. Pricing begins at $12,000 per year.
  • OnSolve offers a variety of hosted ENS tools. It also has an AI engine to provide emergency intelligence that businesses can use for decision-making. Pricing ranges from a basic system for under $2,000 to more complex systems with a variety of pricing plans.
  • Everbridge offers numerous ENS options for many different applications and uses AI functionality to analyse data from multiple sources to provide intelligence for emergency management. The company offers on-site as well as managed emergency notification services, with fixed and monthly pricing plans.

By Paul Kirvan

Sourced from TechTarget