Google has reduced the price of its OnHub routers at the Google Store. The ASUS branded OnHub now costs $129, $70 off the regular price, and the TP-Link OnHub comes in at $145, $54 off the normal asking price.

The routers’ chief purpose is to deliver faster and more consistent Wi-Fi throughput, in a more intuitive package. They offer speeds up to 1,900 Mbps, come with 13 powerful antennas to reduce the number of dead zones in a household/office, and support more than 100 simultaneous connected devices. What’s more, they work with the Google Wi-Fi App for Android and iOS so you can control your network with your phone.

Of the pair of wireless routers, the ASUS version is the more modern; it arrived a number of months after the original TP-Link unit and offers some additional “wave control” gestures. Both have been superseded by the Google Wi-Fi router from last year, but they’re still capable products and work with the Google Wi-Fi mesh network if you want to use one as an additional access point.

If you’re interested in these, check out the ASUS OnHub at the Google Store here, and the TP-Link OnHub here. Google hasn’t mentioned a time-frame for the price cuts, so it’s possible that these have been dropped indefinitely to clear stock.

Sourced from Android Authority

By Annelieke van den Berg

Is it purely a visitor that hits the back button or is there more to it? And what can you tell by looking at the bounce rate of a webpage? In this post, I want to show you what it is, what it means and how you can improve your bounce rate.

What’s bounce rate?

Bounce rate is a metric that measures the percentage of people who land on your website and do nothing at all on the page they entered. So they don’t click on a menu item, a ‘read more’ link, or any other internal links on the page. This means that the Google Analytics server doesn’t receive a trigger from the visitor. A user bounces when there has been no engagement with the landing page and the visit ends with a single-page visit. You can use bounce rate as a metric that indicates the quality of a webpage and/or the “quality” of your audience. By quality of your audience I mean whether the audience fits the purpose of your site.

How does Google Analytics calculate bounce rate?

According to Google bounce rate is calculated in the following way:

Bounce rate is single-page sessions divided by all sessions, or the percentage of all sessions on your site in which users viewed only a single page and triggered only a single request to the Analytics server.

In other words, it takes the number of sessions in which a visitor viewed only one page and divides it by the total number of sessions.
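That calculation is easy to reproduce yourself. Here’s a minimal Python sketch (the session data is made up for illustration, not pulled from Google Analytics) that applies the definition above:

```python
# Each session is represented by the number of pages the visitor viewed,
# i.e. the number of page hits the Analytics server received.
sessions = [1, 3, 1, 2, 5, 1, 1]  # hypothetical data

# A bounce is a session with exactly one page view (a single request).
bounces = sum(1 for pageviews in sessions if pageviews == 1)

bounce_rate = bounces / len(sessions) * 100
print(f"Bounce rate: {bounce_rate:.1f}%")  # 4 of 7 sessions bounced: 57.1%
```

Note that any extra hit in a session, even one fired automatically by an event, keeps it out of the bounce count; that detail matters later when we look at unnaturally low bounce rates.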

Having a high bounce rate can mean three things:
1. The quality of the page is low. There’s nothing inviting to engage with.
2. Your audience doesn’t match the purpose of the page, so they don’t engage with it.
3. Visitors have found the information that they were looking for.

I’ll get back to the meaning of bounce rate further below.

Bounce rate and SEO

In this post, I’m talking about bounce rate in Google Analytics. There’s been a lot of discussion about whether bounce rate is an SEO ranking factor. I can hardly imagine that Google uses Google Analytics data as a ranking factor: if Google Analytics isn’t implemented correctly, the data isn’t reliable. Moreover, you can easily manipulate the bounce rate.

Luckily, several Googlers say the same thing: Google doesn’t use Google Analytics data in its search algorithm. But, of course, you need to make sure that when people come from a search engine to your site, they don’t bounce back to the search results, since that kind of bouncing probably is a ranking factor. It might be measured differently, though, from the bounce rate we see in Google Analytics.

From a holistic SEO perspective, you need to optimize every aspect of your site. So looking closely at your bounce rate can help you optimize your website even further, which contributes to your SEO.

How to interpret bounce rates?

Whether a high bounce rate is a good or a bad thing really depends on the purpose of the page. If the purpose of the page is purely to inform, then a high bounce rate isn’t a bad thing per se. Of course you’d like people to read more articles on your website, subscribe to your newsletter and so on. But when they’ve only visited a page to, for instance, read a post or find an address, then it isn’t surprising that they close the tab after they’re done reading. Mind you, in that case too, no trigger is sent to the Google Analytics server, so it’s still a bounce.

A clever thing to do, when you own a blog, is to create a segment that only contains ‘New visitors’. If the bounce rate amongst new visitors is high, think about how you could improve their engagement with your site, because you do want new visitors to engage with it.

If the purpose of a page is to actively engage with your site, then a high bounce rate is a bad thing. Let’s say you have a page that has one goal: get visitors to subscribe to your newsletter. If that page has a high bounce rate, then you might need to optimize the page itself. By adding a clear call-to-action, a ‘Subscribe to our newsletter’ button, for instance, you could lower that bounce rate.

But there can be other causes for a high bounce rate on a newsletter subscription page. If you’ve lured visitors in under false pretenses, you shouldn’t be surprised when they don’t engage with your page: they probably expected something else when they landed there. On the other hand, if you’ve been very clear from the start about what visitors can expect on the subscription page, a low bounce rate could say something about the quality of the visitors – they could be very motivated to get the newsletter – and not necessarily about the quality of the page.

Bounce rate and conversion

If you look at bounce rate from a conversion perspective, it can be used as a metric to measure success. For instance, let’s say you’ve changed the design of a page hoping that it will convert better; make sure to keep an eye on the bounce rate of that page. If you’re seeing an increase in bounces, the design change might have been the wrong one, and it could explain a low conversion rate.

You could also check the bounce rate of your most popular pages. Which pages have a low and which pages have a high bounce rate? Compare the two, then learn from the pages with low bounce rates.

Another way of looking at your bounce rate is from a traffic sources perspective. Which traffic sources lead to a high or a low bounce rate? Your newsletter, for instance? Or a referral website that sends a lot of traffic? Can you figure out what causes this bounce rate? And if you’re running an AdWords campaign, you should keep an eye on the bounce rate of that traffic source as well.

Be careful when drawing conclusions though…

We’ve seen loads of clients with a bounce rate that was unnaturally low. All alarm bells should go off, especially if you don’t expect it, as that probably means Google Analytics isn’t implemented correctly. Several things can influence bounce rate because they send a trigger to the Google Analytics server, which Google Analytics then falsely counts as engagement. Usually, an unnaturally low bounce rate is caused by an event that triggers the Google Analytics server. Think of pop-ups, auto-playing videos, or an event you’ve implemented that fires after 1 second.

Of course, if you’ve created an event that tracks scrolling counts, then having a low bounce rate is a good thing. It shows that people actually scroll down the page and read your content.

How to lower high bounce rates?

The only way of lowering your bounce rate is by amping up the engagement on your page. In my opinion, there are two ways of looking at bounce rate. From a traffic perspective and from a page perspective.

If certain traffic sources have high bounce rates, then you need to look at the expectations of the visitors coming to your site. Let’s say you’re running an ad on another website, and most people coming to your site via that ad bounce: you’re not living up to their expectations. Review the ad you’re running and see if it matches the page you’re showing. If not, make sure the page is a logical follow-up of the ad, or vice versa.

If your page lives up to the expectations of your visitors, and the page still has a high bounce rate, then you have to look at the page itself. How’s the usability of the page? Is there a call-to-action above the fold on the page? Do you have internal links that point to related pages or posts? Do you have a menu that’s easy to use? Does the page invite people to look further on your site? These are all things you need to consider when optimizing your page.

What about exit rate?

Bounce rate is frequently confused with exit rate. The exit rate is the percentage of pageviews that were the last in a session; it says something about users deciding to end their session on your website on that particular page. Google’s support page gives some examples that make the difference between exit rates and bounce rates very clear. This comes directly from their page:

Monday: Page B > Page A > Page C > Exit
Tuesday: Page B > Exit
Wednesday: Page A > Page C > Page B > Exit
Thursday: Page C > Exit
Friday: Page B > Page C > Page A > Exit

The % Exit and Bounce Rate calculations are:

Exit Rate:
Page A: 33% (3 sessions included Page A, 1 session exited from Page A)
Page B: 50% (4 sessions included Page B, 2 sessions exited from Page B)
Page C: 50% (4 sessions included Page C, 2 sessions exited from Page C)

Bounce Rate:
Page A: 0% (one session began with Page A, but that was not a single-page session, so it has no Bounce Rate)
Page B: 33% (Bounce Rate is less than Exit Rate, because 3 sessions started with Page B, with one leading to a bounce)
Page C: 100% (one session started with Page C, and it led to a bounce)
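The difference is easiest to see in code. This Python sketch reproduces Google’s example above: exit rate divides exits by every session that included the page, while bounce rate divides bounces only by the sessions that started on the page.

```python
# Google's example sessions, Monday through Friday
# (each list is the sequence of pages viewed, last page = exit page).
sessions = [
    ["B", "A", "C"],  # Monday
    ["B"],            # Tuesday
    ["A", "C", "B"],  # Wednesday
    ["C"],            # Thursday
    ["B", "C", "A"],  # Friday
]

results = {}
for page in ["A", "B", "C"]:
    # Exit rate: sessions ending on this page / sessions including this page.
    included = [s for s in sessions if page in s]
    exits = [s for s in included if s[-1] == page]
    exit_rate = len(exits) / len(included) * 100

    # Bounce rate: single-page sessions starting on this page /
    # all sessions starting on this page.
    started = [s for s in sessions if s[0] == page]
    bounced = [s for s in started if len(s) == 1]
    bounce_rate = len(bounced) / len(started) * 100

    results[page] = (round(exit_rate), round(bounce_rate))
    print(f"Page {page}: exit rate {exit_rate:.0f}%, bounce rate {bounce_rate:.0f}%")
```

Running this prints exactly the percentages Google lists: Page A 33%/0%, Page B 50%/33%, Page C 50%/100%.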

Conclusion

Bounce rate is a metric you can use to analyze your marketing efforts. You can use it to measure whether you’re living up to your visitors’ expectations, and to decide which pages need more attention. Meeting your visitors’ expectations and making your pages more inviting for visitors all lead to creating an awesome website. And we all know that awesome websites rank better!

By Annelieke van den Berg

Annelieke van den Berg manages the Content, SEO and Brand department at Yoast. She has her Master’s degree in Sociology and focuses on all things related to marketing.

Sourced from Yoast

By Deepak Gupta.

Google is working on a new global-scale communication system that includes a large number of satellites (over 1,000) and ground stations.

The communication system aims to increase satellite signal coverage of the populated world, reduce collisions between satellites as they orbit the earth, and provide overlapping coverage so that coverage of a portion of the earth is maintained when one of the satellites covering that portion malfunctions.

PatentYogi’s expert patent search team discovered a recent patent application from Google that discloses details of this global-scale communication system. The patent application is embedded in the post below.

According to the patent application, a first group of satellites is launched at a higher altitude, which allows the global-scale communication system to have fewer satellites at the higher altitude, since the higher a satellite is, the larger the coverage area it covers. Thereafter, a second group of satellites is launched that includes a larger number of satellites, providing greater system bandwidth and supporting an increased number of users on the global-scale communication system. The first group of satellites and the second group of satellites are arranged to provide at least 75% coverage of the earth at any given time.
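The altitude/coverage tradeoff the patent relies on follows from simple spherical geometry. As a rough illustration (a back-of-the-envelope sketch of my own, not taken from the patent), the area of the earth a satellite can see grows quickly with altitude:

```python
import math

R = 6371.0  # mean Earth radius, km

def footprint_area(altitude_km):
    """Area of the earth visible from a satellite, in km^2, assuming a
    spherical earth and a zero-degree minimum elevation angle."""
    # Earth-central half-angle from the sub-satellite point to the horizon.
    lam = math.acos(R / (R + altitude_km))
    # Area of the spherical cap with that half-angle.
    return 2 * math.pi * R**2 * (1 - math.cos(lam))

# A higher satellite covers far more ground, so fewer satellites are
# needed at the higher altitude, just as the application describes.
for h in (550, 1200, 8000):
    print(f"{h:>5} km altitude: {footprint_area(h) / 1e6:.1f} million km^2")
```

The altitudes here are arbitrary examples; the patent application does not specify the orbits. The geometric point stands regardless: doubling the altitude roughly doubles the visible cap area at low altitudes.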

Structure of the Satellite Constellation

Multiple satellites in the first group and the second group work in concert to form a satellite constellation. The satellites within the satellite constellation are coordinated to operate together and overlap in ground coverage to avoid communication downtime when a satellite is experiencing problems (e.g., mechanical, electrical, or communication). The satellite constellation includes satellites having enough inter-satellite links to make the constellation fully connected, where each satellite is equipped with communication equipment and additional antennas to track the location of other satellites in the same plane or in adjacent planes in order to communicate with them.

The ubiquitous double coverage of the earth provided by the satellites of the first group and the second group allows for outages of satellites (individual spacecraft failures), power sharing throughout the constellation, and multiple line-of-sight opportunities for every ground-space link, offering the ability to de-conflict from “keep away” lines of sight (e.g., to geosynchronous satellites). The double coverage also allows maximum utilization of the individual spacecraft footprints, with more overlap over populous areas; synchrony throughout the constellation, for predictable and safe conjunction miss distances at the orbit intersections; and a low spacecraft count without recourse to polar inclinations that would unnecessarily provide coverage of unpopulated arctic regions.

The satellites have a trajectory with an inclination angle of less than 90 degrees and greater than zero degrees with respect to the equator of the earth. The satellites have a hexagonal coverage footprint of the earth, which allows for the smallest number of satellites.

The satellite constellation may serve different purposes, such as military or civilian observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites.

Google has been working on multiple projects to provide Internet connectivity across the world. This includes the Google Loon project.

Patent Number – US 20170005719

By Deepak Gupta

Sourced from TNW

By Mary Jo Foley.

Google is looking to attract Microsoft enterprise customers with new beta versions of images for SQL Server Enterprise and Windows Server Core for Google Compute Engine.

Google is expanding support of Windows Server and SQL Server on the Google Cloud Platform with the goal of making its cloud “the best enterprise cloud environment.”

On February 1, Google made available beta versions of pre-configured images for Microsoft SQL Server Enterprise and Windows Server Core on its Google Compute Engine. Google also announced support for SQL Server AlwaysOn Availability Groups to shore up its enterprise high-availability and disaster-recovery story. And the company said all of its Windows Server images are now enabled with Windows Remote Management Support, including its Windows Server Core 2016 and 2012 R2 images.

Google Cloud Platform has lagged behind AWS and Microsoft Azure since its start. But, will key customers and new tools help turn the tide?

As of yesterday, Google Cloud customers can launch Compute Engine virtual machines with SQL Server Enterprise Edition pre-installed and pay by the minute for SQL Server and Windows Server licenses, or bring their own licenses. Beta versions of pre-configured images are available for SQL Server Enterprise 2012, 2014, and 2016.

This isn’t Google’s first foray into providing better support for Windows enterprise customers and developers on the Google Cloud. Last year, Google added ASP.NET, Visual Studio, and PowerShell support, as well as support for Windows Server 2016 on the Google Compute Engine.

Amazon already provides a full suite of images for Microsoft’s various enterprise products, including Windows Server 2016, Windows Server containers, and SQL Server 2016 for AWS cloud users, but this Microsoft-enterprise push is much newer for Google.


Sourced from ZDNet

By Ayaz Nanji.

Ranking among the top Google search results is increasingly driven by dynamic factors, such as content relevance and user intent, rather than static factors, such as the number of keywords and links on a webpage, according to recent research from Searchmetrics.

The annual Searchmetrics ranking factors report was based on an analysis conducted in 2016 of Google search results for 10,000 keywords. The researchers examined which webpages were presented in the top 10 mobile and desktop results for each keyword and then determined which factors correlated to high rankings.

The presence of a few technical requirements, such as H1 tags and HTTPS encryption, help pages rank well across almost all keywords, the analysis found.

However, many of the other factors that influence high search rank, such as time spent on site and click-through rate (CTR), are dependent on individual searchers and pieces of content.

The fact that top search results for keywords are now driven less and less by universal factors led Searchmetrics to conclude that marketers should increasingly focus on topic-specific SEO/content tactics rather than broad approaches.

As the researchers put it in the report: “Except for important technical standards, there are no longer any specific factors or benchmark values that are universally valid for all online marketers and SEOs. Instead, there are different ranking factors for every single industry, or even every single search query. And these now change continuously.”

About the research: The annual Searchmetrics ranking factors report was based on an analysis conducted in 2016 of Google search results for 10,000 keywords. The researchers examined which webpages were presented in the top 10 mobile and desktop results for each keyword and then determined which factors correlated to high rankings.

By Ayaz Nanji

Sourced from MarketingProfs

By Gavin O’Malley.

Hoping to get consumers’ attention? Good luck. More or less, that’s the conclusion of some fresh findings from Google.

Among other challenges, consumers are increasingly splitting their focus between multiple screens.

In fact, about half of users now rely on more than one type of gadget in an average day, and a fifth report using another device while concurrently using a computer.

“Fluid movement between devices changes our approach to marketing,” according to the search giant’s new report. “Consumers now interact with your brand concurrently on more than one type of device, making it critical to provide the same great experience across screens.”

Of those who browse the Web in an average day, almost half do so on multiple devices, while more than seven in 10 users browse the Web on their phones or computers.

In addition, marketers can no longer count on consumers to make room in their busy lives for large screens.

Indeed, in an average day, more than a quarter of all users only use a smartphone, which is nearly two times as many as those who only use a computer.

What’s more, among those who search, nearly 4 in 10 search only on a smartphone in an average day.

As a result of this broader shift to mobile, Google is now seeing more searches happening on smartphones than on computers.

Among the categories experiencing the most growth in mobile searches, home and garden has seen an increase of 45% year-over-year, while apparel and consumer electronics each experienced a 40% bounce.

The data in Google’s new report is based on findings from a behavioral measurement of a convenience sample of nearly 12,000 opt-in Google users. The data was then calibrated to reflect a U.S. demographic of 18- to 49-year-old cross-device users.


Sourced from MediaPost

By Laurie Sullivan.

Rakuten Marketing engineers believe they have uncovered a measurement flaw in Omniture, Google Analytics, Coremetrics and other analytics packages that measure the click-through rates (CTRs) and cost per clicks (CPCs) for Facebook mobile campaigns.

In its Facebook Measurement Divide report, released Wednesday and based on an analysis of client performance data, the company reveals discrepancies between Facebook conversion tracking and Web analytics that cost advertisers insight into 192% more attributable revenue and a higher return on ad spend.

The cross-device campaigns analyzed reveal that attributable revenue comprised, on average, only 5.6% of the total revenue generated across mobile-only, desktop-only and cross-device campaigns — and as little as 2.4% for one retailer in the study.

Bob Buch, SVP of social at Rakuten — which supports attribution, affiliate, search, mobile, lead generation and more — said when the company began digging into clients’ campaigns it found that the CTR metrics were significantly higher and the CPCs quite a bit lower. “Omniture was missing 80% of the revenue, which explains why marketers are not investing more in mobile,” he said. “I’m not saying there’s something inherently wrong with the platform, but I do know it is not measuring mobile accurately for nearly every client we work with.”

Buch believes the tracking is inconsistent with what advertisers see in their Web analytics for several reasons. For starters, there are technological challenges that prevent conversion tracking on Facebook from functioning correctly, he said. In other words, there are additional conversions happening that are simply not recorded anywhere.

The report goes into more detail, outlining how post-click conversions are tracked differently by Facebook conversion tracking than in Web analytics. It also suggests that Web analytics platforms that rely on cookies cannot accurately track cross-device conversions because of the inherent challenges with identifying consumers across devices, and that some discrepancies are attributable to certain mobile operating systems and Internet browser combinations blocking third-party cookies.

Buch sees some of Rakuten’s bigger clients apply what he calls a “mobile multiplier.” He also says that it will be interesting to see what Adobe, Google and other platforms do to correct this discrepancy. When asked whether Rakuten sees this discrepancy with other social sites, he says, “I suspect this type of discrepancy would happen on any walled garden where there’s a mobile app linking to a mobile Web site, but truthfully the other social sites are not advanced enough to see the data at scale. We just don’t have the data.”


Sourced from MediaPost

By Timothy B. Lee.

I was skeptical of the Touch Bar when I first read rumors about it earlier this week. Those rumors turned out to be true: The newest MacBook Pro has a small touchscreen above the keyboard, where there used to be a row of physical “function” keys.

But now that I’ve seen the Touch Bar in action in Apple’s presentation, I think it has the potential to be the biggest change in the way people use their Macs since Apple introduced multi-touch gestures on trackpads more than 10 years ago.

What’s more, the creation of the Touch Bar illustrates what’s so powerful about Apple’s distinctive model of innovation.

This article is part of New Money, a new section on economics, technology, and business.

Most technology companies have focused on one part of the technology “stack” and left the rest to others. In the Windows PC world, Intel made chips, Dell made computers, Microsoft made the Windows operating system, and Adobe made software like Photoshop. Companies took a similar approach in the world of Android smartphones, with Google making the software and a variety of companies making competing handsets. That approach has helped Windows and Android dominate their respective markets.

In contrast, Apple controls the entire “stack” for its products. It manufactures the hardware, writes a lot of the software, and even makes some of its own chips. This makes it hard to achieve a large market share, since it’s difficult for one company to serve a lot of different kinds of customers. But the example of the Touch Bar shows that the Apple approach still has some distinct advantages.

It’s hard to imagine anyone other than Apple successfully pulling off an ambitious innovation like the Touch Bar because it requires simultaneous investments on both the hardware and software sides of the business.

Apple’s ability to make dramatic changes to its platforms has been an important source of strength for the company. And it’s a big reason that two of Apple’s chief competitors — Google and Microsoft — have increasingly aped Apple’s business model in recent years.

Why it’s hard to bring features like the Touch Bar to Windows laptops

The idea of a Windows Touch Bar isn’t entirely hypothetical. Lenovo, one of the biggest players in the PC laptop business, tried to introduce a similar feature in 2014, called the Adaptive Keyboard. But the “execution was poor,” Tech Radar writes. “It was hard to tell which icons did what and difficult to customize the various modes.”

The feature never really took off. And on one level, this was a result of poor execution on Lenovo’s part. But there were deeper factors that made that result almost inevitable.

A feature like Touch Bar or Adaptive Keyboard is only going to succeed if it becomes a platform-wide standard. And on a decentralized platform like Windows, that creates a chicken-and-egg problem: Applications developers are only going to put in the effort to support it if it’s available on a lot of laptops. But laptop makers are only going to offer it if there’s a lot of application support.

This is a particularly severe problem in the Windows PC world precisely because the PC market is so competitive. The hardware for the Touch Bar is apparently expensive — Apple is charging $300 extra for the cheapest MacBook Pro with a Touch Bar compared with the entry-level MacBook Pro without it.

So if a PC maker added a Touch Bar to its laptops, it would be taking a big risk of getting undercut by competitors that skipped the Touch Bar and charged significantly less. This is probably one reason Lenovo’s adaptive keyboard was so much less impressive than the Touch Bar — the Chinese company couldn’t spend a lot on the feature and risk being priced out of the market.

Why Apple’s model can be good for innovation

Former Apple CEO Steve Jobs. Photo by Justin Sullivan/Getty Images

Apple is in a better position, not just because people are already willing to pay a significant premium for Apple products but also because Apple’s ownership of the entire Mac platform allows the company to recoup more of the benefits from bets that work.

Apple has one final advantage: Because it controls 100 percent of Mac sales, it can give software makers confidence that a new feature is going to be widely adopted across the platform. Apple initially introduced the Touch Bar only on high-end MacBook Pros. But presumably over the next few years the company will add it to other laptops, as the company has done with other new features like the iSight camera and multi-touch trackpad.

Knowing this, software companies like Adobe (makers of Photoshop) are going to be more willing to make their own investments in supporting the technology, knowing that they’ll be able to recoup those benefits for years to come.

And Touch Bar is just one example. You can tell a similar story about Apple Pay, Apple’s electronic payment system. Getting it off the ground required buy-in from both stores and software developers. And that buy-in was easier to get when Apple could promise that tens of millions of iPhones would have the necessary hardware — including the TouchID fingerprint sensor — over the next couple of years. Now Apple is adding the technology to the MacBook as well.

Google and Microsoft are shifting toward Apple’s model

Until a few years ago, the model Microsoft pioneered — provide the software and let others build the hardware — looked like a winner. The strategy allowed Microsoft to dominate the PC business and capture a large share of the value from the Windows ecosystem. Google pursued a similar strategy with Android, and today the platform dominates the smartphone market.

But both Microsoft and Google have found that this model has a big downside: With so many players involved, it can be hard to deliver a consistent user experience or introduce major new innovations.

For Microsoft, the big problem has been the rise of tablets. Microsoft has been anticipating the rise of tablets for more than a decade, and it has tried several times over the years to introduce more tablet-friendly versions of Windows.

But Microsoft relied on third parties both to produce the tablets and to write much of the software these tablets ran on. That often produced chaos, with different features being supported on different platforms and few common standards that software developers could rely on. So the tablet computing experience was often subpar, and users often just fell back to using an old-fashioned keyboard and trackpad.

Google has had a similar challenge with Android. Not only does Android run on a wide variety of different smartphones with different sizes, features, and processor speeds, but many smartphone makers also customize the Android software itself. This kind of “fragmentation” makes developing software for the Android platform a more frustrating experience. And it inherently makes it harder for Google to move the Android platform in bold new directions, since it has to wrangle a bunch of different Android hardware makers to go along.

This explains why both Microsoft and Google have become increasingly aggressive about building their own hardware instead of relying on third parties to do it.

“We’re not just building hardware for hardware’s sake,” Microsoft’s Satya Nadella said in 2015. “We plan to invent new personal computers and new personal computing.”

For the past few years, Microsoft has been building its own Surface tablets. That has made it easier for the company to engage in Apple-like innovations like the Surface Studio, a desktop computer Microsoft introduced this week with a giant touchscreen display.

For its part, Google adopted the Apple model in earnest only this month with the introduction of the Pixel, the first fully Google-made smartphone.

Microsoft and Google are both hoping that adopting Apple’s business model will allow them to duplicate Apple’s record of innovation and, ultimately, Apple’s profits. But doing that won’t be easy. Apple has had 30 years to develop its expertise at the wide variety of functions — hardware design, software design, chip design, supply chain management, marketing, retail, and so forth — that go into bringing a MacBook or an iPhone to market. Google and Microsoft have a lot of catching up to do.

By Timothy B. Lee

Sourced from VOX