By Jack Kelly

As we stand on the cusp of a new year, the job market continues to evolve at an unprecedented pace, driven by technological advancements and shifting economic realities. In this dynamic environment, professionals across all industries are recognizing the critical importance of upskilling and reskilling to remain competitive and relevant.

The coming year presents a golden opportunity to invest in yourself by acquiring the in-demand skills that employers are actively seeking, ensuring you’re well-positioned for career growth and new opportunities in an increasingly digital and automated world.

The rapid acceleration of digital transformation, catalysed by recent global events, has reshaped the way businesses operate and the skills they require from their workforce. From artificial intelligence and data analytics to cloud computing and cybersecurity, the demand for tech-savvy professionals continues to soar across sectors.

In-Demand Hard Skills For The New Year

As traditional job roles evolve and new positions emerge, the ability to learn and adapt quickly has become a critical asset in itself. By proactively developing these in-demand hard skills, you not only enhance your marketability but also position yourself to thrive in the face of future disruptions and opportunities in the job market.

1. Artificial Intelligence and Machine Learning

AI and machine learning are becoming indispensable skills in the job market, with their importance growing exponentially across industries. Demand for AI-related skills is 3.5 times higher than for the average job skill, reflecting the rapid integration of these technologies across sectors, a PwC report revealed.

This surge in demand is driven by the transformative potential of AI and ML in the workplace. This fast-emerging technology is expected to automate up to 300 million jobs in the United States and Europe, according to investment bank Goldman Sachs, while simultaneously creating 97 million new roles that require advanced technical skills, as predicted by the World Economic Forum. This shift is not just about job displacement; it’s about job evolution. Companies adopting AI are planning to expand their workforce, with 91% of firms integrating AI aiming to increase their employee numbers by 2025.

2. Cloud Computing

Cloud computing skills will remain in high demand, as the industry continues its explosive growth and transformation of business operations across sectors. Gartner forecasts global end-user cloud spending to reach $723 billion in 2025, a 21.5% increase from the previous year.

The rise of generative AI and the need for integrated platforms are accelerating cloud adoption, with 90% of organizations projected to have hybrid cloud deployments by 2027. As organizations continue to migrate their applications and workloads to the cloud, with 48% planning to move at least half of their applications within a year, proficiency in cloud computing will be crucial for professionals looking to stay relevant in the rapidly evolving job market of 2025.

3. Cybersecurity

Cybersecurity skills are highly coveted, as the digital landscape faces unprecedented threats and skyrocketing costs associated with cybercrimes. By 2025, global cybercrime costs are projected to reach a staggering $10.5 trillion annually, according to a report by Cybercrime Magazine.

This surge in cybercrime is accompanied by a severe shortage of qualified professionals in the field. The cybersecurity job market is expected to grow by 33% between 2023 and 2033, with an estimated 3.5 million unfilled cybersecurity positions worldwide by the end of 2025. This talent gap is further exacerbated by the rapid evolution of cyber threats, with encrypted threats increasing by 92% in 2024 and malware rising by 30% in the first half of the same year.

4. Data Analysis

Businesses are increasingly relying on transforming unstructured data into actionable insights to drive growth, improve user satisfaction and maintain a competitive edge in the market. The demand for data analytics expertise is surging across industries, with trends like AI-enhanced analytics, natural language processing and advanced data visualization reshaping how organizations leverage their data assets.

As organizations grapple with the challenges of data quality and governance, professionals skilled in ensuring data integrity and implementing effective data strategies will be in high demand, making data analysis an essential skill.

5. Digital Marketing

In today’s digital landscape, businesses are leveraging online social platforms to connect with and engage their target audiences and customers.

With global digital ad spending projected to surpass $740 billion in 2024, and over 5 billion social media users worldwide, proficiency in digital marketing strategies will be crucial for professionals looking to thrive in the competitive job market.

Feature Image Credit: Getty

By Jack Kelly

Jack Kelly has been a senior contributor for Forbes since 2018, covering topics in career development, job market trends and workplace dynamics. His articles often focus on practical advice for job seekers and employees, as well as the latest news impacting workers, so they can make informed decisions about their careers.

Sourced from Forbes

By

Behind the pay-as-you-go pricing model, the public cloud is teeming with the latest and greatest development, devops, and AI tools for building better and smarter applications faster.

When we think of the public cloud, often the first consideration that comes to mind is financial: Moving workloads from near-capacity data centres to the cloud reduces capital expenditures (CapEx) but increases operating expenditures (OpEx). That may or may not be attractive to the CFO, but it isn’t exactly catnip for developers, operations, or those who combine the two as devops.

For these people, cloud computing offers many opportunities that simply aren’t available when new software services require the purchase of new server hardware or enterprise software suites. What takes six months to deploy on-premises can sometimes take 10 minutes in the cloud. What requires signatures from three levels of management to create on-prem can be charged to a credit card in the cloud.

It’s not just a matter of time and convenience. The cloud also enables higher velocity for software development, which often leads to lower time to market. The cloud can also allow for more experimentation, which often leads to higher software quality.

In addition, there are real innovations in the cloud that can provide immediate benefits and solve long-standing problems with on-premises computing. Here we present 16 compelling cloud capabilities.

Compute instances on demand

Need a new database on its own on-premises server? Get in line, and prepare to wait for months if not years. If you can tolerate having an on-prem virtual machine (VM) instead of a physical server and your company uses VMware or similar technologies, your wait might only take weeks. But if you want to create a server instance on a public cloud, you can have it provisioned and running in about 15 minutes – and you’ll be able to size it to your needs, and turn it off when you’re not using it.
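To make this concrete, here is a minimal sketch of provisioning a compute instance on demand with the AWS SDK for Python (boto3). The AMI ID is a hypothetical placeholder; region and credentials are assumed to come from your environment.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Provision a small instance on demand, sized to your needs.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # Later, turn it off when you're not using it, so you stop paying:
    # ec2.stop_instances(InstanceIds=[instance_id])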

Pre-built virtual machine images

Being able to bring up a VM with the operating system of your choice is convenient, but then you still need to install and license the applications you need. Being able to bring up a VM with the operating system and applications of your choice all ready to run is priceless.

Serverless services

“Serverless” means that a service or piece of code will run on demand for a short time, usually in response to an event, without needing a dedicated VM on which to run. If a service is serverless, then you typically don’t need to worry about the underlying server at all; resources are allocated out of a pool maintained by the cloud provider.

Serverless services, currently available on every major public cloud, typically feature automatic scaling, built-in high availability, and a pay-for-value billing model. If you want a serverless app without being locked into any specific public cloud, you could use a vendor-neutral serverless framework such as Kubeless, which only requires a Kubernetes cluster (which is available as a cloud service; see below).
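As an illustration, here is a minimal Python function in the AWS Lambda handler style; the event fields are hypothetical, and the same on-demand, pay-per-execution pattern applies to the other providers’ serverless offerings.

    import json

    # The platform invokes handler(event, context) in response to an event;
    # no dedicated VM is provisioned or managed by the developer.
    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }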

Containers on demand

A container is a lightweight executable unit of software, much lighter than a VM. A container packages application code and its dependencies, such as libraries. Containers share the host machine’s operating system kernel. Containers can run on Docker Engine or on a Kubernetes service. Running containers on demand has all the advantages of running VMs on demand, with the additional advantages of requiring fewer resources and costing less.

Pre-built container images

A Docker container is an executable instance of a Docker image, which is specified by a Dockerfile. A Dockerfile contains the instructions for building an image, and is often based on another image. For example, an image containing Apache HTTP Server might be based on an Ubuntu image. You can find pre-defined Dockerfiles in the Docker registry, and you can also build your own. You can run Docker images in your local installation of Docker, or in any cloud with container support. As with pre-built virtual machine images, a Dockerfile can bring up a full application quickly, but unlike VM images Dockerfiles are vendor-agnostic.
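For illustration, a minimal sketch using the Docker SDK for Python (docker-py) to pull a pre-built image from the registry and run it as a container; it assumes a local Docker daemon and the docker package installed.

    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Pull a pre-built image and run a one-off container from it.
    output = client.containers.run("ubuntu:22.04", "echo hello from a container")
    print(output.decode())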

Kubernetes container orchestration

Kubernetes (K8s) is an open source system for automating deployment, scaling, and management of containerized applications. K8s was based on Google’s internal “Borg” technology. K8s clusters consist of a set of worker machines, called nodes, that run containerized applications. Worker nodes host pods, which contain applications; a control plane manages the worker nodes and pods. K8s runs anywhere and scales without bounds. All major public clouds have K8s services; you can also run K8s on your own development machine.
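As a small example, this sketch uses the official Kubernetes Python client to list the pods a cluster is running; it assumes a kubeconfig for an existing cluster, whether local or cloud-hosted.

    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config
    v1 = client.CoreV1Api()

    # Pods are the smallest deployable units K8s schedules onto worker nodes.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)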

Auto-scaling servers

You don’t have to containerize your applications and run them under Kubernetes to automatically scale them in the cloud. Most public clouds allow you to automatically scale virtual machines and services up (or down) as driven by usage, either by adding (or subtracting) instances or increasing (or decreasing) the instance size.
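For instance, here is a minimal boto3 sketch that resizes an EC2 Auto Scaling group by setting its desired capacity; the group name is a hypothetical placeholder, and in practice a scaling policy would usually adjust this automatically based on usage.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Ask the group to grow (or shrink) to four instances.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="my-web-tier",  # hypothetical group name
        DesiredCapacity=4,
        HonorCooldown=True,
    )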

Planetary databases

The major public clouds and several database vendors have implemented planet-scale distributed databases with underpinnings such as data fabrics, redundant interconnects, and distributed consensus algorithms that enable them to work efficiently and with up to five 9’s reliability (99.999% uptime). Cloud-specific examples include Google Cloud Spanner (relational), Azure Cosmos DB (multi-model), Amazon DynamoDB (key-value and document), and Amazon Aurora (relational). Vendor examples include CockroachDB (relational), PlanetScale (relational), Fauna (relational/serverless), Neo4j (graph), MongoDB Atlas (document), DataStax Astra (wide-column), and Couchbase Cloud (document).

Hybrid services

Companies with large investments in data centres often want to extend their existing applications and services into the cloud rather than replace them with cloud services. All the major cloud vendors now offer ways to accomplish that, both by using specific hybrid services (for example, databases that can span data centres and clouds) and on-premises servers and edge cloud resources that connect to the public cloud, often called hybrid clouds.

Scalable machine learning training and prediction

Machine learning training, especially deep learning, often requires substantial compute resources for hours to weeks. Machine learning prediction, on the other hand, needs its compute resources for seconds per prediction, unless you’re doing batch predictions. Using cloud resources is often the most convenient way to accomplish model training and predictions.

Cloud GPUs, TPUs, and FPGAs

Deep learning with large models and the very large datasets needed for accurate training can often take much more than a week on clusters of CPUs. GPUs, TPUs, and FPGAs can all cut training time down significantly, and having them available in the cloud makes it easy to use them when needed.

Pre-trained AI services

Many AI services can be performed well by pre-trained models, for example language translation, text to speech, and image identification. All the major cloud services offer pre-trained AI services based on robust models.

Customizable AI services

Sometimes pre-trained AI services don’t do exactly what you need. Transfer learning, which trains only a few neural network layers on top of an existing model, can give you a customized service relatively quickly compared to training a model from scratch. Again, all the major cloud service providers offer transfer learning, although they don’t all call it by the same name.
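To illustrate the idea, here is a minimal transfer-learning sketch in Keras: freeze a pre-trained image model and train only a small new classification head on top. The input shape and class count are assumptions for the example.

    import tensorflow as tf

    # Load a model pre-trained on ImageNet, without its classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False  # keep the pre-trained layers fixed

    # Add a few new layers on top and train only those.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # 5 custom classes
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=3)  # train the new head on your own data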

Monitoring services

All clouds support at least one monitoring service and make it easy for you to configure your cloud services for monitoring. The monitoring services often show you a graphical dashboard, and can be configured to notify you of exceptions and unusual performance indicators.
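As an example, a minimal boto3 sketch that configures an Amazon CloudWatch alarm to flag unusual performance; the instance ID and SNS topic ARN are hypothetical placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Notify an SNS topic when average CPU stays above 80% for 10 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )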

Distributed services

Databases aren’t the only services that can benefit from running in a distributed fashion. The issue is latency. If compute resources are far from the data or from the processes under management, it takes too long to send and receive instructions and information. If latency is too high in a feedback loop, the loop can easily go out of control. If latency is too high between machine learning and the data, the time it takes to perform the training can blow up. To solve this problem, cloud service providers offer connected appliances that can extend their services to a customer’s data centres (hybrid cloud) or near a customer’s factory floors (edge computing).

Edge computing

The need to bring analysis and machine learning geographically close to machinery and other real-world objects (the Internet of Things, or IoT) has led to specialized devices, such as miniature compute devices with GPUs and sensors, and architectures to support them, such as edge servers, automation platforms, and content delivery networks. Ultimately, these all connect back to the cloud, but the ability to perform analysis at the edge can greatly decrease the volume of data sent to the cloud as well as reducing the latency.

The next time you hear grief about your cloud spending, perhaps you can point to one of these 16 benefits – or to one of the cloud features that have helped you or your team. Any one of the cloud innovations we’ve discussed can justify its use. Taken together, the benefits really are irresistible.

Feature Image Credit: Deyan Georgiev / Shutterstock

By

Sourced from InfoWorld

Edge computing and public clouds exist with synergy and codependence. This is the de facto model going forward.

We define edge computing as the ability to place some amount of processing and data near the sources of the data as well as near the systems or humans that need quick access to the processing.

It’s a simple idea, and certainly nothing new. However, the popularity of edge computing continues to gather steam as we move more systems to centralized public clouds and modernize related applications and data stores.

As a result of this migration, we now recognize that not all modernized applications and data stores should exist only in a central location. Thus we have the ‘new’ option of moving them to the realm of edge computing, specifically to the edge of public clouds.

Much of the initial confusion with edge computing came from erroneous messaging from the tech press (and even from some companies) that edge computing was a replacement for cloud computing, and other notions that were incorrect at their core. Yes, there are questions that need to be answered when any newly hyped technology concept hits the technology zeitgeist. However, once we understood the concepts of edge and cloud computing in the context of each other, the patterns of synergy began to emerge. Hopefully the confusion will continue to subside.

The Edge of What?

What drove the concept of edge computing was the rise of IoT and other technologies that are distributed to be optimized for the systems and humans that leverage them.

For example, it doesn’t make sense for a self-driving automobile to send all data and requests for data processing over a cellular network to some centralized system in a public cloud. The only way self-driving cars will work is if they can maintain data and processing at the edge, meaning, in the car.  This allows the data and processing to occur with little or no network latency, providing fast enough reactions that you don’t hit a tree.

However, edge is not just for devices anymore.  Edge clouds are now an option for those who want to have a small cloud instance in their data centre. This allows local processing and data storage with much less latency than if the data and processing requests were sent one thousand miles away to a public cloud server that is shared with hundreds of other tenants.

The idea is to keep some but not all public cloud services on the edge clouds while still supporting a symbiotic relationship between the edge clouds and their public cloud overlords. They can work together as needed for storage and processing, sharing data and processing tasks. System developers have the option to deploy data and applications on the edge cloud, on the public cloud, or split between the two.

Microsoft’s Azure Stack and AWS Outposts are the best examples of edge clouds. However, other, smaller cloud providers have also exploited the desire of some enterprises to leverage edge clouds. The larger cloud players often look at edge clouds as a path to their public clouds, which typically have more services and benefits. However, some enterprises will continue to opt for edge clouds over public clouds.

Beyond edge clouds and edge devices we have:

  • Edge sensors, where data is usually consumed through a triggering event. For example, your smart thermostat might send a text to your phone when it’s time to change your filter.
  • Edge branches maintain their own set of compute and data storage functions. For example, banks may leverage this model to support a remote branch that independently uses local systems that don’t require constant interaction with centralized systems or public clouds.
  • Edge enterprises, like edge branches, allow independent systems to exist within parts of a larger geographically distributed enterprise. For example, an edge enterprise system can support a European office with special data and processing that’s specific to the local office’s country and location.
  • Edge datacentres are smaller datacentres that exist to support a geographic region. This edge segment has seen strong growth since the pandemic started, with employees working from areas that need a closer datacentre to support their remote activities.

Cloud Computing vs. Edge Computing

Right now, the confusion seems to centre around how cloud and edge computing are supposed to coexist. Reality check: Edge computing typically means the edge of public clouds. You can partition the processing and data between the public clouds and edge-based systems in such a way that each carries out the role it does best.

Consider our self-driving car example above. The device within the car should do some immediate processing, such as figuring out that you’re headed for a tree and taking immediate evasive action so you’re not killed. Also important but less critical, the edge device may share massive amounts of engine data with systems on back-end public cloud servers that can proactively determine maintenance issues. They may leverage more process-intensive services such as AI and deep data analytics to find and match patterns to ensure that you’re not stranded on the side of the road with a mechanical breakdown.

The idea is that each tier, edge and cloud, carries out the set of functions proper to it. The cloud takes on tasks that require large amounts of storage, processing, and even specialized services such as AI, analytics, and pattern matching. The edge device does tasks that don’t require excessive processing and data storage but need an immediate response with limited or no latency. Together, the edge and cloud systems form a single unified system whose edge and cloud components are purpose-built to be hosted on an edge or cloud platform.
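A minimal, self-contained Python sketch of this partitioning, with illustrative names and thresholds: the edge tier makes the latency-critical decision locally, while bulk telemetry is batched and shipped to the cloud tier for heavier analysis.

    telemetry_batch = []

    def apply_brakes():
        print("braking!")  # stand-in for the real actuator

    def send_to_cloud(batch):
        # Hypothetical uploader to back-end analytics (AI, pattern matching).
        print(f"uploading {len(batch)} records for back-end analysis")

    def on_sensor_reading(distance_to_obstacle_m, engine_stats):
        # Edge tier: immediate, low-latency reaction, no network round trip.
        if distance_to_obstacle_m < 5.0:
            apply_brakes()

        # Cloud tier: buffer bulk engine data and ship it asynchronously.
        telemetry_batch.append(engine_stats)
        if len(telemetry_batch) >= 1000:
            send_to_cloud(telemetry_batch)
            telemetry_batch.clear()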

The Edge on Public Cloud

The public cloud providers saw this coming a mile away. All major cloud providers offer edge development and deployment services, including those that use container, serverless, and other technologies developed for clouds as well as those developed for edge computing.

Public cloud providers can manage deployments to edge-based systems, and even maintain digital twins for edge-based devices and systems. This allows you to maintain versions of applications and data for testing and deployment that run on most types of edge systems.

Public cloud-based edge development and deployment systems can even handle versioning, configuration management, and other functions related to managing a massive number of distributed edge-based systems. This supports most of the edge computing models listed above, such as edge enterprises, edge devices, edge clouds, and edge datacentres.

Yes, edge computing is many different things, but most paths lead back to public cloud computing. The edge needs to be at the edge of something. In most cases, it’s at the edge of a public cloud. Edge computing and public clouds exist with synergy and codependence.  This is the de facto model going forward.

By David Linthicum

David Linthicum is the Chief Cloud Strategy Officer at Deloitte Consulting.

Sourced from eWeek

KEYNOTE speakers provided insight into how technology was transforming travel at the Travelport LIVE Africa conference in Hermanus last week.

Mike Croucher, Travelport’s chief architect, spoke about digitally reimagining travel. He pointed out:

What today’s hyperconnected travellers want and what they value have changed. While cost, choice and convenience are still significant, booking decisions are now based on the experience.

From the moment a traveller thinks about a trip to planning it, booking it and living it we, in the travel industry, must deliver a convenient, personal, all-encompassing experience.

Competition is fierce. Disruptive businesses like Airbnb and Uber, adept at delivering new inspirational experiences, have torn down long-standing monopolies and eroded brand loyalty.

What makes it more than just a trip?

The Internet of Things: 

The IoT refers to the interconnection, via the internet, of computing devices embedded in everyday objects, enabling them to send and receive data at speed. Human beings, however, do not interact directly with the IoT. Instead, we have a mobile device, through which we can digitally exchange information and personalise experiences. This could be adjusting the temperature in a hotel room or pre-ordering room service before arrival.

Mobile:

According to the GSMA, more than two-thirds of the world’s population, 5 billion people, are connected to a mobile service. Research conducted with 11,000 respondents from 19 countries, including South Africa, revealed just how vital cellphones are for travellers.

Not only do 33% of travellers book their trips on a mobile device, but 62% also say digital boarding passes and e-tickets make travelling easier, and 46% say a good digital experience is important when choosing an airline. The mobile device acts as a travel companion. From searching to returning home, it shapes the traveller’s experience of the overall journey. It offers a means of continuous, one-on-one engagement, enabling offers and the availability of services to be tailored to an individual’s preferences. To do this, a mobile device needs intelligence.

Artificial Intelligence (AI):

AI can unlock the insights needed to create personalised experiences. It allows businesses to become more proactive and strategic through predictive capabilities – for example, recommendation engines that suggest the best time to buy a flight, book a hotel and so on. By informing a travel AI, training it and providing it with access to extensive real-time data sets, it becomes possible to deliver frictionless experiences.

Big Data:

The way we share, analyse and absorb information through technology has exploded to the point where big data’s usage is commonplace. Aside from the benefits of shaping individual travellers’ experiences, businesses can leverage data to better understand what is/isn’t working. Data is the fuel that powers 21st-century commercial intelligence.

In the travel industry, by analysing a complex set of data points like travel history and demographics, predictive analytics can plot travellers’ next moves before they know what they are themselves. To use the data, we need access to significant quantities of computing power. Some of this can be provided by cloud-based infrastructure.

Cloud computing:

Cloud computing technology provides the infrastructure to compute vast amounts of data quickly, affordably and on demand. It is the glue that holds the travel industry together by enabling data and content to be moved with relative ease, as well as computed and delivered as close to the point of consumption geographically as possible.

What does the future hold?

“We should be excited about what the future of the travel industry holds,” Croucher says. “In the Fourth Industrial Revolution, delivering the right kind of travel experience is going to rely on practically applying the technologies described here. The onus falls on us to be enterprising enough to grasp the opportunities.”

Sourced from IOL

By Alan R. Earls

To reap the benefits, a cloud migration requires a well-thought-out IT strategy. But some enterprises continue to make the move too soon — and then pay a price.

Cloud first is the new orthodoxy, and with good reason. Companies large and small are moving assets off site and into the cloud, driven by financial incentives and often enticed by the chance to access leading-edge technology without having to staff up or make large hardware investments.

Still, not every cloud has a silver lining. Anecdotal evidence crops up now and again that shows some organizations just aren’t finding cloud nirvana. Some, indeed, say they have brought resources back into traditional data centers or have built a private cloud environment.

For some, cloud computing has not worked — or not worked well. The reasons, experts say, are usually both narrow and specific, involving situations that couldn’t easily fit a public cloud model. Their message: If you understand where you are going, your requirements and the capabilities of the cloud, your cloud deployment will probably succeed.

Marc Clark, director of cloud strategy and deployment at Teradata Corp., a data warehouse and service provider, based in Dayton, Ohio, said he has grown accustomed to hearing cloud laments, but it’s usually because people wandered into a cloud-first policy without adequate preparation.

“I can’t tell you the number of times I hear the phrase, ‘Because we have a cloud mandate,’ when people are asked why they are moving to the cloud,” he said.

Surprisingly, many people aren’t able to explain why they have a cloud-first policy, Clark said. But most, undeterred by the fact that they don’t actually have a clearly understood reason for moving to the cloud, march on anyway.

“Even worse than moving somewhat blindly to the cloud, I often hear of companies having a single public cloud provider strategy,” Clark said. The attraction to standardizing on a single provider is an effort to reduce financial and procurement complexity. The problem, he argued, is that public clouds were not built to handle every situation and application equally.

For example, some business applications are especially compute-hungry and can stress an infrastructure. “I have seen many examples of such workloads that simply don’t fit into the cloud,” he said.

In one instance, a major U.S.-based manufacturer that had been using an on-premises enterprise data warehouse for years was faced with a new cloud mandate to move to a specific public cloud provider. Since the company had moved web applications and email onto the public cloud, the manufacturer thought that the process of migrating the data warehouse would be no different. After several weeks of testing, it became obvious that the public cloud just wasn’t going to give the company’s users the performance and reliability that they were used to, he said.

In fact, according to a 2017 cloud preparedness report from Forrester Research, technology factors, such as an organization’s experience with virtualization and DevOps, seem to correlate with success in adopting a cloud-first policy. Also critical, according to the report, is conducting an application portfolio assessment. This helps organizations clarify which applications are most appropriate for cloud and also helps highlight “the critical role of microservices and containers in modern service delivery,” the authors wrote.

Samir Shah, chief of staff at Baselayer Technology, a modular data center and management software provider, based in Chandler, Ariz., said he has seen other examples of poor or inadequate planning on the path to a cloud-first policy.

“The trend across the enterprise landscape is for CIOs to outline a strategy to move or migrate to the cloud,” Shah said. When it’s time to act, however, the unanticipated challenges and complexities come into focus. “We’ve seen public cloud migrations where expenses escalate upward post-launch based on unanticipated utilization.”

“There are potential reasons for disappointment with a cloud move, and the primary one is cost,” Gartner analyst Arun Chandrasekaran said.

Usually the problem is that planners fail to correctly estimate their needs. But, Chandrasekaran said, sometimes it is a matter of not having the right procedures and governance in place, as when unnecessary instances are created and never shut down.

Some businesses simply look at spending and scale and decide they can bring the processing back on premises.

“That’s what Dropbox decided to do last year,” Chandrasekaran said. The cloud-based file storage provider chose to build a private cloud rather than staying with Amazon Web Services. The move by Dropbox and some others to shift to private cloud was not based on price alone, Chandrasekaran said. Sometimes there’s the desire to control and enhance features.

“In the public cloud, you get what others get; you will not get a specialized infrastructure,” he said. In certain situations, having an infrastructure tailored to your applications and uses provides a competitive advantage, he explained. And a private cloud can sometimes better address security, privacy or compliance requirements, especially regarding geographic location of data.

Still, with the right processes and tools, a business can reduce its public cloud spending.

And despite those occasional concerns about cost or performance, cloud adoption continues to gain momentum. Some businesses, though, choose to put only some of their resources in the cloud, presumably requiring either continued investment in on-premises systems or in a private cloud environment.

The path to cloud nirvana

“A few years ago, I would have said cloud is not mature enough. But now, cloud platforms like [Microsoft] Azure or [Amazon Web Services] are very sophisticated,” said Rohit Lonkar, director of industry relations and Azure solutions at Saviant Consulting, based in Mumbai, India. In general, when you need high-powered infrastructure or need to respond to demand peaks, cloud offers a much better option than on-premises resources, he said.

“Most cloud platforms come with access to analytics,” Lonkar said. With those capabilities, a business can properly monitor and manage its activities.

The only real exceptions, in his view, are when a company feels strongly that it needs to completely control its data, as with a financial institution, perhaps. Then, on-premises computing may be a better option.

Not every company’s data infrastructure needs are the same, Shah noted, so there is no one-size-fits-all answer in the cloud. To avoid some of the potential pitfalls that occur when implementing a cloud-based IT strategy, Shah recommended applying a dual approach.

“IT executives should evaluate a private cloud on premises or in a shared facility,” he said. In many cases, this could be deployed using modular data center technology on the edge or securing space in a colocation facility. Executives could also consider a hybrid cloud strategy, which would include public or private cloud assets in conjunction with a brick-and-mortar data center.

“One of our partners, a top five financial investment bank, elected to add data capacity via a colocation facility to host a private cloud in order to manage their data-security needs,” Shah said. “But they still utilize the public cloud for data that does not maintain the same security requirements.

“The key is to develop a cloud-based strategy that allows for proactive decision-making instead of reactive decision-making,” Shah said. “Keep in mind that the software supporting a company’s data infrastructure is critical to ensuring this proactive mindset.”

By Alan R. Earls

Sourced from TechTarget Search Cloud Computing

By Patrick R.

Over the last couple of years, the cloud industry has gone through a drastic change with the evolution of serverless computing.

In 2006, the basic concept was introduced by Zimki, which launched the first “pay as you go” code execution platform. However, the credit for its commercialization at scale goes to Amazon Web Services (AWS), which unveiled AWS Lambda, a serverless computing service, in 2014.

AWS comes first when we talk about cloud services; it dominates the market thanks to its reliable and effective cloud service offerings. But its lead is not as clear-cut in serverless computing. Other tech giants such as Microsoft, Google and IBM have already started exploring this newly evolved cloud technology. Compared to the others, Microsoft holds the second position in serverless cloud infrastructure after AWS, with a service quite similar to AWS Lambda.

Certainly, this race will be at its peak in the very near future, as all the leading cloud service providers are rushing to offer serverless computing. Wondering why it is so? What is serverless computing? What are the advantages and disadvantages of the technology? In this article, we have tried to cover all the essential aspects of serverless computing. Here’s everything you need to know.

What Is Serverless Computing?

It is a next-generation cloud computing technology, also known as Function-as-a-Service (FaaS). The word ‘serverless’ doesn’t mean there are no servers. Serverless computing is an event-driven application design and deployment paradigm in which computing resources are provided as scalable cloud services.

In traditional cloud computing, organizations must pay a fixed, recurring amount to run their websites and applications, whether they use all of their instances or not. Opt for serverless computing and you pay only for the services or instances you actually use, with no charges for downtime or idle time.

Under serverless computing, applications are event-triggered and run in stateless compute containers. Once the application functions are ready, developers rely on the provider’s infrastructure to allocate resources and execute them. As the load on a function increases, the infrastructure creates more copies of it and scales to meet demand.

Serverless computing is an extension of microservices. The serverless architecture is divided into specific core components. Microservices group similar functionalities into one service, while serverless computing breaks functionalities down into finer-grained components. Developers write custom code and execute it as autonomous, isolated functions that run in stateless compute services.

Best Services in Serverless

AWS Lambda

AWS Lambda is the most preferred and functionally sound serverless cloud platform, permitting companies to run their applications without managing or provisioning servers. It is cost-effective because you pay only for what you use; no cost is incurred when your code is not running. Lambda lets you run code for virtually any type of backend service or application with zero administration, and organizations can scale their Lambda resources to their specific requirements.

All you need to do is upload the code; the rest is taken care of by Lambda. It has the capability to scale and run the code with high availability. Using Lambda, organizations can build advanced data processing (real-time file processing, real-time stream processing, data validation, filtering and sorting) and backends (IoT backends, mobile backends, web applications).

Adopting AWS Lambda enables developers to focus on application logic rather than worrying about how many instances of which services to spin up. They can leave the architectural details to the cloud service provider and easily react to external events.
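To show how little ceremony is involved, here is a minimal boto3 sketch that invokes an existing Lambda function; the function name and payload are hypothetical placeholders, and you pay only for the milliseconds the invocation runs.

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    # Invoke the function synchronously with an event payload.
    response = lambda_client.invoke(
        FunctionName="process-order",  # hypothetical function name
        Payload=json.dumps({"order_id": 42}),
    )
    print(json.loads(response["Payload"].read()))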

Azure Functions

Azure Functions is a similar type of service to AWS Lambda. It allows developers to run event-based serverless compute to speed up development. With Azure Functions, organizations can implement timer-based processing, SaaS event processing, serverless web application architectures, Azure service event processing, real-time stream processing, serverless mobile backends and real-time bot messaging.

Azure Functions helps developers handle the various tasks that respond to an event. It is well suited to the Internet of Things, letting older systems connect to the cloud without hardware replacement.

It allows developers to scale instances out to meet an organization’s changing needs, and organizations pay only for the time their functions run. Azure Service Fabric is a related service that automatically scales the different microservices running in the cloud; Microsoft itself uses it for its internal applications.
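For comparison, a minimal HTTP-triggered Azure Function using the Python programming model; the trigger binding configuration (function.json) is omitted for brevity.

    import azure.functions as func

    # The platform calls main() once per HTTP request; you are billed only
    # for the time the function actually runs.
    def main(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")
        return func.HttpResponse(f"Hello, {name}!", status_code=200)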

Pros & Cons

Upsides

Cloud service providers always look for better ways to manage server resources. Serverless computing addresses this need by letting developers carry out dynamic, changing workloads without worrying about which server the task runs on; the platform manages all the resources on their behalf.

The technology can also pay off over the long term. Cloud service providers manage resources according to an organization’s needs, so serverless computing lets providers avoid allocating resources that organizations don’t need. Done correctly, it is a highly cost-effective approach for both companies and cloud service providers.

Downsides

Serverless computing is a brilliant concept for scaling cloud resources as requirements change. However, the technology has a few performance issues. No doubt, it gives organizations greater flexibility in how compute resources respond to an application’s changing needs; but if you are seeking extremely high performance from your application, it is preferable to go with a virtual server instead of a serverless approach.

As serverless computing is an emerging technology, debugging and monitoring are quite complicated. Both activities are difficult to perform on an application that is not using a single server resource.

Wrap Up

Adopt serverless computing to run your business and gain the advantages of advanced scalability, flexibility and affordability. Amazon Web Services and Microsoft Azure have already started offering these services, and we can expect other cloud market leaders like Google to facilitate organizations with similar services in the very near future.

Intuz is an AWS-certified cloud service provider that helps organizations accelerate business automation and simplification by applying the latest AWS services. We have a team of AWS-trained professionals with in-depth knowledge of AWS Lambda, the serverless cloud. Want to implement serverless computing with AWS Lambda? Don’t hesitate to get in touch with our AWS experts.

Thanks for reading!

Originally published at intuz.com

By Patrick R

Passionate entrepreneur with over 12 years of experience in Information Technology. Techno-commercial leader heading Intuz as Director.

Sourced from Medium