Pinterest has a lot to offer both beginner and experienced photographers. Here’s how you can make the most of this undercover social media site.
Pinterest is a network where many people seek inspiration for their own creative ideas, and it’s also a spot where those with an open mindset can successfully promote their photography skills.
Whether you work with clients or create your own products (or both), you’ve got plenty of opportunities to use Pinterest effectively. In this guide, you will discover seven possible things you can try when using the platform.
1. Finding Inspiration From Other Photographers
If you’ve ever used Pinterest in a smaller capacity before, it was probably to find inspiration. It’s a great place to look for ideas related to travel, food, lifestyle, and much more. And whether you’re a beginner to photography or a seasoned professional, Pinterest can help you advance your skills even further.
When using Pinterest, you can find inspiration from other photographers in your field. For example, you can look for new locations to capture in your local area or potential ways to place items in a photoshoot if you want to improve your product photography.
Pinterest is also useful if you want to look at new aesthetics to adjust your editing style. You can then advance your editing skills in Lightroom once you’ve saved some ideas.
2. Learning More About Photography as a Craft
Finding inspiration from other photographers is one way to get better at your craft, but it’s not necessarily the most efficient way to do so. For many people, the best way to become a better street photographer (or excel in any genre, for that matter) is to absorb information and put it into action.
Pinterest might not be the first place that comes to mind when thinking about where you can learn more about photography. However, it’s an excellent place to find practical resources, such as blog posts, YouTube videos, and podcast episodes.
Many creators publish pins on Pinterest of the content they share elsewhere, and each pin carries the outbound link. You can use the search bar on Pinterest to find topics (such as “photography tips”). When you see something that you want to refer to later, save it as a pin within a mood board.
If you enjoy using Pinterest, and your favourite browser is Firefox, consider trying these Pinterest add-ons for Firefox.
3. Driving Bookings for Photoshoots
For many photographers, photoshoots are a popular income stream. In addition to being skilled at your craft, you also need to market your services to your target audience. Social media in general is ideal for doing that, and Pinterest can help you get your work in front of others who may wish to do business with you.
When using Pinterest as a photographer, you can share your best work in separate pins. To bring all of these together, consider creating a board that others can access. Each time you share a pin that includes a portfolio piece, you can provide a brief description of what you do. If a user likes what they see, they can click on the link you include and book a shoot directly with you.
If you use pins for photoshoot bookings, it’s worth mentioning the locations you’re willing to work in. Giving others an idea of where you’re based is also wise.
4. Selling Your Own Photography Products
Although many photographers begin their careers with client work, one of the beauties of running a business is that it opens your eyes to plenty of other income-generating opportunities. It’s not uncommon for experienced photographers to launch their own products later on, including prints and photography books.
When promoting your photography services, you’ll need to build awareness around the products in your store if you want to maximize your earning potential. Pinterest is a handy place for doing precisely that.
To help promote products from your online store, you should think about what the end result is for the buyer. For example, if you create and sell Lightroom presets, showcase what a picture looks like before and after applying one. Similarly, if you sell prints, you can show how a photo looks when it hangs on someone’s wall.
5. Finding Other Local Photographers
Although many people have a camera on their smartphone these days, being truly passionate about photography—to the point of learning more about the craft—is much rarer. As a result, you may find your life as a photographer quite lonely if you don’t make an effort to meet like-minded people.
There are many ways that social media brings people together, and one of its greatest benefits is that you can find people living near you with related interests. If you’re looking for other photographers, you can use Pinterest in a similar way to Instagram by typing your location followed by “photographer” in the search bar.
When you see a photographer that interests you, you can send them a message on Pinterest. Alternatively, see if they have any other social media accounts and contact them on those.
6. Asking Clients to Find Photoshoot Styles That Interest Them
Like selling a service in any other business, it’s essential that you understand what your client wants from a photoshoot. The best way to do this is by asking in advance—or even better, asking your client to provide examples of what they’re looking for.
You can ask clients to create a board of different ideas on Pinterest, and once they’ve done this, they can send you the link. Alternatively, you can create a board of your own ideas and share your proposals to see if the client is happy with them.
When creating boards for this phase, you can always make them private if you want. You can find out how to use secret boards on Pinterest in a separate guide.
7. Promoting Pins via Advertising
Like many other social media networks, Pinterest offers an avenue for paid advertising if you want to get in front of more people. Of course, you’ll need a budget to use Pinterest advertising—but once you’re established, you may find that this is a useful option for getting in front of more users.
When advertising on Pinterest, make sure you understand who you’re trying to reach with your photography. Moreover, you should set a clear limit on how much you’re willing to spend in advance.
Pinterest: A Powerful Platform for Photographers
Pinterest has several uses for photographers, and it can help both those who work with clients and photographers who sell products. Even if you’re only in your early days as a photographer, you can find significant value in using Pinterest to discover different photography styles and resources.
Now that you’ve read this guide, you should have a better idea of how you can make the most out of this still-underrated social media platform. Why not experiment with each of these tips?
Danny specializes in writing about productivity and also enjoys exploring different creative disciplines, especially photography. He has a degree in Sports Journalism and has been writing professionally since 2016.
Have you seen the new Adobe *logo*? It appears in promotional videos introducing Adobe’s Photoshop and Illustrator with generative artificial intelligence, and it looks more luscious than a doughnut at a gluten-free convention.
Instead of using the classic red “A” graphic mark that Adobe users know from the company’s arsenal of creative tools, the videos show a new design: A sleek, slanting “A” that precedes the letters “dobe,” which are set in a bold typeface. It looks so cool that I thought Adobe had at last updated its brand.
Except, well, it’s not the new Adobe logo. According to Heather Combs, Adobe’s head of brand strategy and customer insights, it’s not a logo at all. Rather, she says, it’s a wordmark. “It’s not replacing the logo; it’s just a new expression in the way that we are creating the way our brand shows up in the world,” she says.
Combs popped my happy bubble over a video chat last week, after I inquired about the new look, casually (and hopefully) asking if this was the dawn of a new branding era at Adobe. Combs emphatically told me that this glorious new mark, seen across Adobe promotional videos, is not a logo; it’s simply a new part of its brand tool kit. “We’ve had a logo for a long time. A beautiful logo, which is still our logo moving forward. We’ve just added a new piece to the tool kit so we can express the brand in multiple ways,” she says.
[Images: Adobe]
Combs is referring to the corporate logo that illustrator Marva Warnock, wife of company cofounder John Warnock, designed in 1982 for Adobe’s launch. Warnock’s design featured a slanted “A” whose crossbar formed an open triangle. Though Adobe updated Warnock’s design in 1990 to the red, white, and black logo still used today, the company has had a shockingly consistent visual language from its earliest days. Which is why a new logo would have been so exciting.
Combs’ distinction between logo and wordmark rests on intention. A wordmark can be a logo. Just look at Coca-Cola, Google, and FedEx—designs that are made with letters set in a typeface. A logo—a term that comes from Greek for “word”—is generally understood by most humans to be anything that represents a brand. That can be a graphic mark (Apple’s bitten apple), a logotype (IBM), or a combination of both, sometimes called a graphical logo (Nike, Target, and Unilever).
In the case of Adobe, this wordmark isn’t the official corporate logo, and therefore, by the rules of corporate branding 101, it cannot be considered a logo at all. Combs says the company toyed with its new design at last year’s Adobe Max conference, but there weren’t any “hard rollouts.” Not until these new generative AI promo videos, that is. It’s really a shame because this design is better than any logo refresh I can think of in recent memory.
The new Adobe wordmark is modern and futuristic, yet it looks so classic that it could have easily been Adobe’s first logo from the ’80s. Just look at the perfect way the classic “A” mark integrates with the “dobe” set in a heavy weight of the Adobe Clean typeface.
“We felt that we were missing a bit of a bolder version of Adobe Clean,” Combs explains. This led Adobe’s typography team to craft a new version that was bolder and took up “a bit more presence,” as Combs describes it. As a result of this exercise, Combs says they are “also looking at the weight of the corporate mark as well.” It seems like the right move because, right now, the official “Adobe” feels rather anaemic and more boring than beige.
[Image: Adobe]
Even though it’s not a logo, the new mark does feel like it’s coming at a pivotal time for the company as it enters its AI era. When I asked Combs if this wordmark was the herald of the new generative AI Adobe, she deflected, saying that while she could see different places where they would use the mark, she won’t confirm it as a mascot for Adobe’s generative-AI tools.
Still, she concedes: “With what we were doing around generative AI and Firefly, we wanted to be more on the edge. And I think we see the wordmark as being a very fresh, bold kind of way to express the brand. Again, not in replacement of the corporate mark, but a really interesting and intriguing way to express the brand.”
When I told her that the wordmark is awesome and perhaps it would confuse people who might mistake it as a new official logo, she says she doesn’t believe that’s the case. “Our objective is to put the Adobe brand in people’s mouths to make them think about us and want to talk about us,” she says. “If this new wordmark creates a point of interest, then it’s doing its job.”
Combs insists that Adobe has no plans to quietly swap its current logo for the new wordmark. She believes its merits are clear. “It’s not broken. Our corporate mark is widely recognized. It is something a lot of people see and feel very positive things about. And I want to continue to sort of build off of that,” she says. Though, she does admit, the wordmark is a bit more scalable to social and places where the current Adobe logo can feel a bit cramped. “This wordmark gives us a bit more of a compact way to express the brand,” she says.
Exactly! Combine that with the cool factor, the classic look, and the obvious tie into the new generative AI era, and you’ve got a rock-solid case to evolve the corporate brand. As an Adobe fan since Photoshop version 2, I sure hope that happens.
Unstructured text and data are like gold for business applications and the company bottom line, but where to start? Here are three tools worth a look.
Developers and data scientists use generative AI and large language models (LLMs) to query volumes of documents and unstructured data. Open source LLMs, including Dolly 2.0, EleutherAI Pythia, Meta AI LLaMa, StabilityLM, and others, are all starting points for experimenting with artificial intelligence that accepts natural language prompts and generates summarized responses.
“Text as a source of knowledge and information is fundamental, yet there aren’t any end-to-end solutions that tame the complexity in handling text,” says Brian Platz, CEO and co-founder of Fluree. “While most organizations have wrangled structured or semi-structured data into a centralized data platform, unstructured data remains forgotten and underleveraged.”
If your organization and team aren’t experimenting with natural language processing (NLP) capabilities, you’re probably lagging behind competitors in your industry. In the 2023 Expert NLP Survey Report, 77% of organizations said they planned to increase spending on NLP, and 54% said their time-to-production was a top return-on-investment (ROI) metric for successful NLP projects.
Use cases for NLP
If you have a corpus of unstructured data and text, some of the most common business needs include:
Entity extraction by identifying names, dates, places, and products
Pattern recognition to discover currency and other quantities
Categorization into business terms, topics, and taxonomies
Sentiment analysis, including positivity, negation, and sarcasm
Summarizing the document’s key points
Machine translation into other languages
Dependency graphs that translate text into machine-readable semi-structured representations
Sometimes, having NLP capabilities bundled into a platform or application is desirable. For example, LLMs support asking questions; AI search engines enable searches and recommendations; and chatbots support interactions. Other times, it’s optimal to use NLP tools to extract information and enrich unstructured documents and text.
Let’s look at three popular open source NLP tools that developers and data scientists are using to perform discovery on unstructured documents and develop production-ready NLP processing engines.
Natural Language Toolkit (NLTK)
“I think the most important tool for NLP is by far Natural Language Toolkit, which is licensed under Apache 2.0,” says Steven Devoe, director of data and analytics at SPR. “In all data science projects, the processing and cleaning of the data to be used by algorithms is a huge proportion of the time and effort, which is particularly true with natural language processing. NLTK accelerates a lot of that work, such as stemming, lemmatization, tagging, removing stop words, and embedding word vectors across multiple written languages to make the text more easily interpreted by the algorithms.”
NLTK’s benefits stem from its endurance, with many examples for developers new to NLP, such as this beginner’s hands-on guide and this more comprehensive overview. Anyone learning NLP techniques may want to try this library first, as it provides simple ways to experiment with basic techniques such as tokenization, stemming, and chunking.
spaCy
“spaCy is a free, open-source Python library providing advanced capabilities to conduct natural language processing on large volumes of text at high speed,” says Nikolay Manchev, head of data science, EMEA, at Domino Data Lab. “With spaCy, a user can build models and production applications that underpin document analysis, chatbot capabilities, and all other forms of text analysis. Today, the spaCy framework is one of Python’s most popular natural language libraries for industry use cases such as extracting keywords, entities, and knowledge from text.”
Tutorials for spaCy show similar capabilities to NLTK, including named entity recognition and part-of-speech (POS) tagging. One advantage is that spaCy returns document objects and supports word vectors, which can give developers more flexibility for performing additional post-NLP data processing and text analytics.
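A short sketch of named entity recognition and POS tagging with spaCy, assuming the library is installed; the snippet downloads the small en_core_web_sm model on first use, and the sample sentence is invented:

```python
import spacy

# Load spaCy's small English pipeline, downloading it if missing.
try:
    nlp = spacy.load("en_core_web_sm")
except OSError:
    from spacy.cli import download
    download("en_core_web_sm")
    nlp = spacy.load("en_core_web_sm")

# Processing text returns a Doc object that carries tokens,
# entities, and tags together for downstream analytics.
doc = nlp("Adobe opened a new studio in Paris in 2023.")

# Named entity recognition: each entity span with its label
entities = [(ent.text, ent.label_) for ent in doc.ents]
print(entities)

# Part-of-speech tags on the same document object
for token in doc:
    print(token.text, token.pos_)
```

The Doc object returned by the pipeline is what gives spaCy its flexibility: the same object can be passed on to further processing steps without re-parsing the text.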
Spark NLP
If you already use Apache Spark and have its infrastructure configured, then Spark NLP may be one of the faster paths to begin experimenting with natural language processing. Spark NLP has several installation options, including AWS, Azure Databricks, and Docker.
“Spark NLP is a widely used open-source natural language processing library that enables businesses to extract information and answers from free-text documents with state-of-the-art accuracy,” says David Talby, CTO of John Snow Labs. “This enables everything from extracting relevant health information that only exists in clinical notes, to identifying hate speech or fake news on social media, to summarizing legal agreements and financial news.
Spark NLP’s differentiators may be its healthcare, finance, and legal domain language models. These commercial products come with pre-trained models to identify drug names and dosages in healthcare, financial entity recognition such as stock tickers, and legal knowledge graphs of company names and officers.
Talby says Spark NLP can help organizations minimize the upfront training in developing models. “The free and open source library comes with more than 11,000 pre-trained models plus the ability to reuse, train, tune, and scale them easily,” he says.
Best practices for experimenting with NLP
Earlier in my career, I had the opportunity to oversee the development of several SaaS products built using NLP capabilities. My first was a SaaS platform for searching newspaper classified advertisements, covering cars, jobs, and real estate. I then led the development of NLP capabilities for extracting information from commercial construction documents, including building specifications and blueprints.
When starting NLP in a new area, I advise the following:
Begin with a small but representative sample of the documents or text.
Identify the target end-user personas and how extracted information improves their workflows.
Specify the required information extractions and target accuracy metrics.
Test several approaches and use speed and accuracy metrics to benchmark.
Improve accuracy iteratively, especially when increasing the scale and breadth of documents.
Expect to deliver data stewardship tools for addressing data quality and handling exceptions.
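The benchmarking step above can be sketched with a small, library-free harness; the sample documents, the naive extractor, and the expected entities are all hypothetical placeholders:

```python
import time

def naive_extractor(text):
    """Toy entity extractor: treats capitalized words as entity candidates."""
    return {w.strip(".,") for w in text.split() if w[0].isupper()}

# Hypothetical labeled sample: document text paired with expected entities.
samples = [
    ("Acme hired Jane Doe in Boston.", {"Acme", "Jane", "Doe", "Boston"}),
    ("The invoice was sent to Globex.", {"Globex"}),
]

# Time the approach and tally true/false positives and false negatives.
start = time.perf_counter()
tp = fp = fn = 0
for text, expected in samples:
    found = naive_extractor(text)
    tp += len(found & expected)
    fp += len(found - expected)
    fn += len(expected - found)
elapsed = time.perf_counter() - start

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f} time={elapsed:.4f}s")
```

Swapping a different extraction approach into the same harness gives comparable speed and accuracy numbers, which is the point of benchmarking before committing to a tool.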
You may find that the NLP tools used to discover and experiment with new document types will aid in defining requirements. Then, expand the review of NLP technologies to include open source and commercial options, as building and supporting production-ready NLP data pipelines can get expensive. With LLMs in the news and gaining interest, underinvesting in NLP capabilities is one way to fall behind competitors. Fortunately, you can start with one of the open source tools introduced here and build your NLP data pipeline to fit your budget and requirements.
Navigating the tricky landscape of email marketing?
Email deliverability is your secret weapon to bypass the dreaded spam folder and land directly in your audience’s inbox.
Packed with actionable insights and techniques, our guide will transform your emails from avoidable spam to must-read content.
Ready to unlock your email potential and boost your marketing game?
Let’s jump in!
What is Email Deliverability (& Why it’s Important)?
In the labyrinth of email marketing, email deliverability stands as your secret weapon.
But what exactly does it entail?
At its simplest, email deliverability refers to the ability of your emails to land directly into subscribers’ inboxes.
Seems straightforward, doesn’t it? However, the subtleties in the concept beg for a deeper understanding.
Consider this… studies reveal that an acceptable email delivery rate hovers around 95%.
To break it down, for every 1000 emails you send, approximately 950 should hit the bullseye — the inbox.
But what about the elusive remaining 50?
These strays might find themselves lost in the wilderness of the spam folder, a destination dreaded by every email marketer.
Now, you might be asking yourself, why does email deliverability hold such weight?
The reason is quite simple…
Even the most creatively engaging email crafted with finesse amounts to naught if it doesn’t reach the recipient’s inbox. It’s akin to writing an excellent novel that no one reads.
Plus, understanding the importance of email deliverability sets a sturdy foundation for your email campaign.
Now, let’s delve into the tips…
12 Email Deliverability Tips That’ll Elevate Your Delivery Rate & Avoid the Spam Folder
Now that we understand the basics of email deliverability, let’s discuss the actionable strategies you can adopt to avoid the spam folder and elevate your delivery rate…
1. Understand ISP Guidelines
Every mailbox provider (commonly called an ISP in email marketing), such as Gmail, Outlook, or Yahoo Mail, has a unique set of guidelines that it uses to determine which emails get delivered and which get marked as spam.
These guidelines often include technical aspects like IP and domain reputation, the relevance of your content, and your history of following CAN-SPAM laws. Ignoring these guidelines can result in a decrease in your delivery rate.
For instance, if you’re using Google Workspace, it’s crucial to be familiar with their bulk sender guidelines.
Google evaluates emails based on factors like authentication, list quality, and sending practices.
So, by tailoring your email marketing campaign in line with these guidelines, you can improve your chances of avoiding the spam folder.
2. Authenticate Your Emails
Email authentication is like a digital signature; it proves your emails originate from a legitimate source.
Techniques like Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting & Conformance (DMARC) play a pivotal role in maintaining your sender reputation.
With SPF, you publish a DNS record listing the servers that are authorized to send email on behalf of your domain, effectively preventing spammers from forging it.
DKIM, on the other hand, adds a cryptographic signature to your emails, letting receiving servers verify that the message genuinely came from your domain and wasn’t altered in transit.
And DMARC, a protocol that uses SPF and DKIM, helps protect your domain from phishing scams and spoofing.
When implemented correctly, these authentication techniques tell ISPs that you are who you claim to be, increasing the likelihood of your emails landing in the inbox.
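In practice, each of these mechanisms is a DNS TXT record on your sending domain. A hedged illustration of what the three records can look like; the domain, the selector, the ESP include host, and the truncated public key are all placeholders:

```
; SPF: list the servers allowed to send mail for the domain
example.com.               IN TXT "v=spf1 include:_spf.your-esp.example ~all"

; DKIM: publish the public key under a selector (key truncated here)
s1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: tell receivers what to do when SPF/DKIM checks fail
_dmarc.example.com.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

Your ESP typically supplies the exact values to publish; the records above only show the general shape of each mechanism.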
3. Maintain a Good Sender Reputation
Maintaining a robust sender reputation is essential for high email deliverability. Your sender reputation is like a credit score; it demonstrates your trustworthiness as a sender.
Factors contributing to this reputation include your email volume, complaint rate, bounce rate, and whether or not you’ve been blacklisted in the past.
For instance, if your emails frequently bounce because the recipient’s addresses are incorrect or no longer in use, ISPs might see this as a sign of poor list management and flag your emails as spam.
Thus, it’s crucial to regularly check and clean your email list to maintain a good sender reputation.
4. Manage Your Email List
Your email list is a valuable asset. However, it’s not just about size; it’s about quality and engagement.
Regularly cleaning your email list of inactive subscribers, addresses that bounce, and spam traps can increase your sender score, which ISPs use to determine if your email is spam or not.
For example, if a user hasn’t opened your emails in the past six months, it might be worth removing them from your list or placing them in a re-engagement campaign.
Similarly, if an email bounces more than once, it’s best to remove it to prevent harming your sender reputation.
By managing your email list effectively, you not only improve deliverability but also ensure that your content reaches engaged and interested parties.
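The pruning rules above can be sketched in a few lines of Python; the subscriber records, the six-month inactivity threshold, and the bounce limit are illustrative assumptions, not fixed rules:

```python
from datetime import date, timedelta

# Hypothetical subscriber records: (email, last_open_date, bounce_count)
subscribers = [
    ("active@example.com",  date.today() - timedelta(days=30), 0),
    ("dormant@example.com", date.today() - timedelta(days=400), 0),
    ("bouncer@example.com", date.today() - timedelta(days=10), 2),
]

SIX_MONTHS = timedelta(days=183)

keep, reengage, remove = [], [], []
for email, last_open, bounces in subscribers:
    if bounces > 1:                      # repeated bounces: drop immediately
        remove.append(email)
    elif date.today() - last_open > SIX_MONTHS:
        reengage.append(email)           # inactive: try a re-engagement campaign
    else:
        keep.append(email)

print(keep, reengage, remove)
```

Running a sweep like this on a schedule keeps bounce rates down and your sender score intact.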
5. Monitor Email Deliverability
Monitoring your email deliverability with dedicated tools is an integral part of your email marketing strategy.
These tools not only track your delivery rate and inbox placement but also identify issues in your emails that could trigger spam filters.
For example, they can evaluate your subject lines, content, and email list health, providing actionable feedback to improve your deliverability.
Regular monitoring and troubleshooting can help you understand your email performance better and address any potential issues early, ensuring your emails reach their intended inboxes.
6. Content Matters
Content is the heart of your email. It can either make your email a hit or a miss with ISPs’ spam filters.
When crafting your emails, avoid phrases that are commonly used in spam messages, such as “get rich quick,” “risk-free investment,” or “save up to.”
ISPs have advanced spam filters that are triggered by such phrases, which may lead to your emails landing in the spam folder.
Furthermore, avoid using a single large image in place of text. If your email is just one big image with minimal text, spam filters may become suspicious and flag it.
Instead, maintain a healthy text-to-image ratio and use ALT text for images, ensuring your message is still conveyed even if the images don’t load.
7. Personalize Your Emails
Personalized emails resonate more with your subscribers and have a lesser chance of being marked as spam.
Rather than sending the same generic message to everyone on your list, use personalization techniques to make your emails more relevant and engaging.
This could be as simple as addressing the recipient by their first name in the subject line or as complex as using behavioural data to customize the content to each recipient’s interests or past actions.
For instance, if a subscriber recently purchased a coffee maker from your website, you could send them an email with tips on brewing the perfect cup of coffee.
Not only does personalization improve engagement, but it also helps to build a deeper relationship with your subscribers.
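A minimal sketch of merge-field personalization using Python’s standard library; the field names, subscriber data, and copy are invented for illustration:

```python
from string import Template

# Templates with merge fields that get filled per subscriber.
subject = Template("$first_name, brew a better cup with your new coffee maker")
body = Template(
    "Hi $first_name,\n\n"
    "Thanks for your recent purchase of the $product. "
    "Here are three tips for brewing the perfect cup..."
)

subscriber = {"first_name": "Alex", "product": "AeroPress coffee maker"}

print(subject.substitute(subscriber))
print(body.substitute(subscriber))
```

Most ESPs offer the same idea as built-in merge tags; the sketch just shows the mechanics of mapping subscriber data onto a template.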
8. Ask to be Whitelisted
Whitelisting is like having a VIP pass to your subscriber’s inbox. When a subscriber adds you to their email provider’s whitelist, it tells the provider that they want to receive your emails, significantly reducing the chances of your emails being marked as spam.
You can encourage subscribers to whitelist you by including a simple request in your welcome email or in the footer of your newsletters.
For example, “To ensure our emails always reach your inbox, please add our email address to your contacts or safe senders list.”
The process of whitelisting varies between email providers, so it’s helpful to include a link to a page with step-by-step instructions for different providers.
This proactive step can greatly enhance your email deliverability and ensure a consistent subscriber experience.
9. Respect the Unsubscribe
Honouring your subscribers’ wishes isn’t just about goodwill; it’s also a solid strategy for email deliverability. If a subscriber wants to leave, make it as straightforward as possible for them.
This may seem counterintuitive, but in reality, it’s better than getting a spam complaint which can tarnish your sender reputation.
Besides, an unengaged audience can harm your open and click-through rates. Implement a one-click unsubscribe option in a clear, easy-to-spot area in your email. Also, consider having an exit survey.
This optional feedback form can provide invaluable insights into why subscribers are leaving, helping you to improve and adapt your content strategy.
10. Use a Reputable Email Service Provider (ESP)
Your choice of ESP could make or break your email deliverability. A good ESP not only offers robust infrastructure but also provides valuable advice and support to improve your email deliverability.
They manage relationships with ISPs and stay updated on the ever-changing landscape of email regulations, ensuring your emails are always compliant. Moreover, top-tier ESPs offer comprehensive analytics tools.
These tools can help you monitor your email campaigns, enabling you to spot any potential issues and adapt your strategy accordingly.
11. Test Your Emails
Regular email testing is a must for achieving high deliverability rates.
This is more than just sending a test email to a friend. Services such as GlockApps or Mail-Tester allow you to check if your emails are landing in the spam folder, if your IP has been blacklisted, or if you’re encountering other deliverability issues.
It’s also crucial to test your emails across different email clients (like Gmail, Yahoo, Outlook) and devices to ensure your content displays correctly.
Use A/B testing to see what subject lines, email content, or sending times resonate most with your audience, further optimizing your email marketing strategy.
12. Monitor Your Metrics
Metrics are your North Star in the realm of email deliverability. Key metrics such as delivery rate, open rate, click-through rate, and bounce rate can offer valuable insights into your performance.
For instance, a high bounce rate could indicate an outdated email list, while a low open rate might suggest your subject lines aren’t captivating enough.
Monitoring these metrics allows you to identify trends, spot issues, and make data-driven decisions to optimize your campaigns.
Remember, continuous improvement is the name of the game in email marketing.
Use these metrics to guide your strategy, refine your messaging, and ultimately, deliver emails your audience is eager to open.
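The key rates follow directly from raw campaign counts. A small sketch, reusing the 950-of-1,000 delivery figure from earlier in this guide, with the open and click counts invented for illustration:

```python
# Hypothetical campaign counts
sent      = 1000
bounced   = 50
delivered = sent - bounced   # 950, the ~95% benchmark mentioned above
opened    = 190
clicked   = 38

delivery_rate = delivered / sent       # share of sends that arrived
bounce_rate   = bounced / sent         # share that bounced
open_rate     = opened / delivered     # opens among delivered mail
ctr           = clicked / delivered    # clicks among delivered mail

print(f"delivery={delivery_rate:.1%} bounce={bounce_rate:.1%} "
      f"open={open_rate:.1%} ctr={ctr:.1%}")
```

Note that open rate and click-through rate are computed against delivered mail, not total sends, so a worsening bounce rate doesn’t silently flatter the other numbers.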
Mastering the Art of Email Deliverability
You’ve weathered the storm, wrestling with frustration as your meticulously crafted emails vanished into spam.
But cheer up, there’s a silver lining.
With our email deliverability tips, your emails will no longer be the wallflower at the party, but the guest of honour in your recipient’s inbox.
So, stand tall, and let’s conquer that spam folder together!
By Sam Driver
Sam is an Associate Editor for Smart Blogger and family man who loves to write. When he’s not goofing around with his kids, he’s honing his craft to provide lasting value to anyone who cares to listen.
Meta is looking to capitalize on the 200 million users now catered to by WhatsApp Business, a figure that stood at 50 million in 2020.
With ‘click-to-WhatsApp’ ads now accounting for a healthy part of the platform’s revenue, the Meta-owned messaging platform is now looking to expand that to allow users to generate ads without needing a Facebook account.
As part of the revisions to WhatsApp Business, companies will also be able to create ads for Facebook and Instagram from within the app.
WhatsApp Business is getting better
The news comes as Meta continues to battle a tough and changing economy during its so-called ‘Year of Efficiency’ which has seen it lay off thousands of workers and cut funding to several projects.
While the company has stated that it remains committed to artificial intelligence and the metaverse, ensuring a diverse selection of revenue streams is vital for sustainability, and given the four-fold increase in WhatsApp Business users, trying to eke out more money from SMBs and larger enterprises is a no-brainer.
The company has not confirmed how much the changes to its messaging app will cost, but a Reuters report indicates that the new services will be chargeable.
Personalized messaging is also set to come to the platform, with the messaging app soon to be able to generate messages to customers to notify them of upcoming appointments, or for ecommerce businesses to advertise new products and sales, for example.
These changes represent a significant shift for the company as CEO Mark Zuckerberg looks to make more money from Meta’s messaging platforms, including WhatsApp and Messenger.
The company has not announced details on pricing as yet, but testing for the new features is expected to follow imminently before a full release, likely later this year.
With several years’ experience freelancing in tech and automotive circles, Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars and the decarbonisation of personal transportation. As an avid bargain-hunter, you can be sure that any deal Craig finds is top value!
Your copy is the heart of your email; any seasoned email marketer will tell you this. While the design language plays a crucial role in grabbing your readers’ eyeballs, it is ultimately the copy which determines your email’s value proposition. To be able to strike a connection with your subscribers, you need to engage them in a dialogue, and that is going to be pretty tough if your copy isn’t compelling to begin with, isn’t it?
Much has been said about the prowess of email as a communication channel- its mind-boggling user base, its more-than-impressive market penetration, and its jaw-dropping ROI. What you need to realize, however, is that if you want those virtues to be on your side, you first need to tick all the right boxes with your copy. If you are new to this, it can feel a little overwhelming at the outset. And that’s okay. No brand out there perfected their email copywriting game overnight. It’s a continuous journey, necessitating dollops of hard work, tenacity, and resourcefulness. As long as you are mindful of the best practices and stick to your process, stay assured that the results will follow.
What are these best practices, you ask? Well, that’s exactly what we’re going to talk about today. In this blog, we are going to dive deep into a host of email copywriting techniques (all tried and tested, of course) that will help you create a buzz with your campaigns. Excited to learn about them? Read on!
Nail Your Subject Lines and Preheaders
While reading a newspaper, which articles do your eyes dart to first? Invariably the ones with catchy, exciting headlines, right? The same principle more or less applies to emails, with subject lines acting as the equivalent of headlines in this realm. With subscribers’ inboxes bombarded by scores of emails on a daily basis, email fatigue is something most of them complain about. In this scenario, should your subject lines lack clarity and appeal, your emails stand little chance of being opened. After all, the subject line is the first thing about your email that a recipient notices. And we all know the perils of casting a bleak first impression, don’t we?
That said, let us look at the ingredients that go behind cooking up the perfect subject line.
Attempt to pique your subscriber’s curiosity. Ever found yourself eagerly awaiting a movie’s release moments after watching its trailer? If you were to dissect that trailer and pinpoint the exact factors that made it effective, you’d pretty much have a blueprint for the ideal subject line on your hands. Much like a blockbuster trailer, an impactful subject line needs to walk the tightrope between generating interest and not giving away too much of the central plot.
Prioritize clarity. Whatever your email contains- an offer, an update, an announcement- strive to summarize it in your subject line. Essentially, make life easier for your subscribers: don’t make them scroll past the first fold of the email to discover its contents. Granted, clarity in your subject line won’t always translate into an open; at the end of the day, the recipient will engage only if they find the subject line relevant to their interests. But you’ll still earn brownie points even when they ignore your emails, because you didn’t force them to open the email to find out what’s inside.
Add emojis to your subject lines. Emoji-infused subject lines experience 56% higher open rates than their text-only counterparts. That said, this practice comes with a small caveat: emojis are informal by nature, and not every business can afford to use them. Before adopting them, determine whether they align with your brand identity.
Weave sensory words into your subject lines. The better you help your readers visualize your content, the more inclined they will be to engage with your email. Besides, when your copy is peppered with sensory words, it leaves a firm and enduring impression in your subscribers’ minds. In a similar vein, make as much use of action words as possible. At all points in time, your goal should be to tacitly encourage your readers to not just read the email, but also commit themselves to the action stated within it; using action words will help you do so with ease.
Always, always pay attention to your subject line’s length. Choose it keeping in mind that your emails will be read on both mobile and desktop devices. Most experts reckon 41 characters (approximately 6-8 words) to be the ideal length.
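If you like, the length guidance above can be turned into a tiny linter for draft subject lines. This is just a sketch: the thresholds are the ones quoted above, and the word-based check is only an approximation.

```python
IDEAL_CHARS = 41           # the expert-recommended maximum quoted above
IDEAL_WORDS = range(6, 9)  # approximately 6-8 words

def check_subject(subject: str) -> list[str]:
    """Lint a draft subject line against the length guidance above."""
    warnings = []
    if len(subject) > IDEAL_CHARS:
        warnings.append(f"{len(subject)} chars; may truncate on mobile")
    if len(subject.split()) not in IDEAL_WORDS:
        warnings.append(f"{len(subject.split())} words; aim for 6-8")
    return warnings

print(check_subject("Last chance: 30% off everything ends tonight"))
# -> ['44 chars; may truncate on mobile']
```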
Now that we understand how to get subject lines right, let’s address preheaders. First off, why add preheader text? Because it lends context to your subject line, furthering its appeal: it boosts the subject line’s efficacy and gives subscribers an additional push to open your email. So, how do you compose infallible preheaders? Let’s take a look.
Make them conversational. Subject lines are generally declarative in nature, right? Therefore, it pays to keep the preheader interrogative- essentially, attempt to foster a dialogue with the reader.
As with subject lines, the length of preheaders is quite critical too. Should it fall outside the prescribed limits (generally considered to be 40-130 characters), your preheader will either pick up text from the email body or get truncated.
Use preheaders to reinforce your email’s value proposition. You’ve already hinted at its content with the subject line. Now, with the preheader, you have the opportunity to paint an even clearer picture for your readers. Say, the subject line announces a new product. With the preheader, you could list out a few of its USPs. Now imagine just how potent this combination of a subject line and preheader text is.
To be sure what works best for your audience, write multiple versions of your preheader and A/B/n test them to identify the best-performing one.
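For the curious, here’s a minimal sketch of how the best-performing variant might be identified from raw send/open counts, using a simple two-proportion z-test. The variant names and numbers below are made up for illustration.

```python
from math import sqrt

def z_score(opens_a, sends_a, opens_b, sends_b):
    # Two-proportion z-test: how surprising is the gap between the two
    # open rates if the variants were actually equally good?
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_b - p_a) / se

def pick_winner(results, threshold=1.96):
    # results: {variant: (opens, sends)}; the first entry is the baseline.
    items = list(results.items())
    base_name, (base_opens, base_sends) = items[0]
    name, (opens, sends) = max(items, key=lambda kv: kv[1][0] / kv[1][1])
    if name == base_name:
        return name, False
    z = z_score(base_opens, base_sends, opens, sends)
    return name, abs(z) >= threshold  # True means significant at roughly 95%

results = {  # made-up campaign numbers for illustration
    "A: Your order has shipped": (180, 2000),
    "B: It's on its way!": (260, 2000),
    "C: Shipping update inside": (210, 2000),
}
winner, significant = pick_winner(results)
print(winner, significant)
```

In practice your email service provider will run the test for you; the point is simply that a winner should beat the baseline by more than random noise before you crown it.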
Leverage Artificial Intelligence (AI)
With the rise of sophisticated AI models such as ChatGPT, it is no longer possible to ignore the impact AI will have on all kinds of sectors in the very near future. In the email marketing space, businesses have already started relying on it to generate content ideas for their newsletters. The one area where AI falls short is nuance, making human intervention an absolute necessity. With the right operator, though, you can take your email copywriting to the next level and stand head and shoulders above your competitors.
Listed below are a few merits of composing your copy with AI.
Increased quality control- The content an AI produces is, essentially, an amalgamation of everything it consumes. You give the model certain directions- the length of the email, its nature, the tone of the copy, and the like- and it gets to work, drawing on the countless examples it has seen that match your parameters to produce copy in the same vein. So, if you play your cards right, you can use AI to obtain large volumes of high-quality copy with stunning regularity.
Earlier, we discussed the need for A/B/n testing several subject lines and preheaders to be able to identify the most effective one. This is, of course, easier said than done. A dozen variants later, your mind tends to run in circles. Enter: AI. Based on your dozen inputs, it can spawn a hundred more along similar lines, giving you more options to contend with, and setting you on course to obtain fruitful results from your A/B/n tests.
The importance of personalization, especially in the current email marketing climate, cannot be overstated. Despite this being common knowledge, brands still fail to marry intention to action, the primary culprit being a lack of resources. You see, implementing personalization is no walk in the park. From segmenting your email list to writing personalized content for each segment to ensuring everyone receives the right message at the right time, it can get incredibly demanding. This is where an AI writing tool comes as a boon. Once you’ve determined the unique profile of each segment, you can feed your findings into the tool and trust it not only to write personalized emails but also to match each segment’s tone when handling replies.
Generate FOMO
Invoking a sense of urgency in your readers’ minds is an excellent tactic to drive them to action, thereby amping up your engagement and interaction levels. While a tempting offer certainly captures your audience’s imagination, putting a ticking clock against it nudges them to take out their wallets and make a transaction. A great way of creating urgency in your emails is by adding countdown timers. There are many occasions where you can summon them into action- sales, new product launches, coupon redemptions, early access, and seasonal events, to name a few. What’s more, making one is pretty convenient as well.
Broadly, you can take two routes to crafting one. The first is to make an HTML timer and embed its code in the email. And no, you don’t have to write this code from scratch; many online tools will generate it for you- all you have to specify is your deadline and time zone. These timers count down in real time. The second is a looped GIF animation. GIF timers are both easier to execute than HTML timers and less likely to be flagged by email clients. However, with clients such as Outlook that don’t support GIFs well, you might face a few challenges.
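The two inputs you hand a timer tool- the deadline and its time zone- are also all the tool needs to compute what the timer displays. A minimal sketch of that arithmetic, with illustrative dates:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def countdown(deadline: str, tz: str, now: datetime) -> dict:
    """Time left until a deadline given in a named time zone, split into
    the days/hours/minutes/seconds a timer graphic would display."""
    end = datetime.fromisoformat(deadline).replace(tzinfo=ZoneInfo(tz))
    left = max(end - now, timedelta(0))  # clamp so the timer stops at zero
    total = int(left.total_seconds())
    return {"days": total // 86400,
            "hours": total % 86400 // 3600,
            "minutes": total % 3600 // 60,
            "seconds": total % 60}

# Sale ends at midnight New York time; the email is opened at 21:30 UTC.
now = datetime(2023, 6, 30, 21, 30, tzinfo=ZoneInfo("UTC"))
print(countdown("2023-07-01T00:00:00", "America/New_York", now))
# -> {'days': 0, 'hours': 6, 'minutes': 30, 'seconds': 0}
```

Getting the time zone right matters: the same midnight deadline shifts by hours depending on where your subscriber- or your brand- is located.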
Tell A Story
Right from the time man first gathered around open fires in small circles to the present, where we bury our noses in Kindles for hours on end, our pursuit has been the same- a good story. Storytelling is as innate to us as our need for survival. It is the hallmark of writing, at any level and of any kind. Therefore, while writing your email copy, always, always try to stitch a narrative. Information lies all around us, but we only feel compelled to engage with it when it is presented as a narrative. Of course, marketing emails can’t have the same flair and vigour as a work of fiction, but you still have enough liberty to take your audience on a journey every time you send them an email. In fact, they’ll keep coming back for more if you manage to strike all the right chords with your narrative.
Tickle Your Readers’ Funny Bones
As a brand, constantly worrying about your audience’s perception of you is not uncommon. Trying to win them over is a constant challenge, but there’s one tool you can always rely upon to make life a touch easier- humour. Why?
Evocative content drives strong results. Email copy that is laced with humour leaves readers awash with positive emotions, causing them to warm up to your brand. Brands indulge in a silly vein every now and then to show subscribers that they are just as imperfect and vulnerable, and that, in turn, goes a long way toward strengthening customer relationships.
Making humour one of your brand’s core values will make it extremely convenient for you to grab the spotlight. People always hold close to their hearts those who manage to bring a smile to their faces. Should your emails manage to do so, you’ll find yourself rewarded generously by your subscribers.
Wrapping It Up
Email copywriting can be a tough nut to crack, but once you get the lay of the land, there’s no looking back. We hope the tips and tricks given above will be able to breathe clarity into your copywriting endeavours.
Disha Bhatt (Dave) works as a Content Strategist at Email Uplers. She is a dentist, who has found her calling in words & technical subjects. She loves to pen down travelogues and romantic short stories in her free time.
Major brands are paying for ads on these sites and funding the latest wave of clickbait, according to a new report.
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
We’ve heard a lot about AI risks in the era of large language models like ChatGPT (including from me!)—risks such as prolific mis- and disinformation and the erosion of privacy. Back in April, my colleague Melissa Heikkilä also predicted that these new AI models would soon flood the internet with spam and scams. Today’s story explains that this new wave has already arrived, and it’s incentivized by ad money.
People are using AI to quickly spin up junk websites in order to capture some of the programmatic advertising money that’s sloshing around online, according to a new report by NewsGuard, exclusively shared with MIT Technology Review. That means that blue chip advertisers and major brands are essentially funding the next wave of content farms, likely without their knowledge.
NewsGuard, which rates the quality of websites, found over 140 major brands advertising on sites using AI-generated text that it considers “unreliable”, and the ads come from some of the most recognized companies in the world. Ninety percent of the ads from major brands were served through Google’s ad technology, despite the company’s own policies that prohibit sites from placing Google-served ads on pages with “spammy automatically generated content.”
The ploy works because programmatic advertising allows companies to buy ad spots on the internet without human oversight: algorithms bid on placements to optimize the number of relevant eyeballs likely to see that ad. Even before generative AI entered the scene, around 21% of ad impressions were taking place on junk “made for advertising” websites, wasting about $13 billion each year.
NewsGuard identifies these websites by using AI to check whether they contain text that matches the standard error messages from large language models like ChatGPT. Those flagged are then reviewed by human researchers.
Most of the websites’ creators are completely anonymous, and some sites even feature fake, AI-generated creator bios and photos.
As Lorenzo Arvanitis, a researcher at NewsGuard, told me, “This is just kind of the name of the game on the internet.” Often, perfectly well-meaning companies end up paying for junk—and sometimes inaccurate, misleading, or fake—content because they are so keen to compete for online user attention. (There’s been some good stuff written about this before.)
The big story here is that generative AI is being used to supercharge this whole ploy, and it’s likely that this phenomenon is “going to become even more pervasive as these language models become more advanced and accessible,” according to Arvanitis.
And though we can expect it to be used by malign actors in disinformation campaigns, we shouldn’t overlook the less dramatic but perhaps more likely consequence of generative AI: huge amounts of wasted money and resources.
What else I’m reading
Chuck Schumer, the Senate majority leader in the US Congress, unveiled a plan for AI regulation in a speech last Wednesday, saying that innovation ought to be the “North Star” in legislation. President Biden also met with some AI experts in San Francisco last week, in another signal that regulatory action could be around the corner, but I’m not holding my breath.
Political campaigns are using generative AI, setting off alarm bells about disinformation, according to this great overview from the New York Times. “Political experts worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process,” reporters Tiffany Hsu and Steven Lee Myers write.
Last week, Meta’s oversight board issued binding recommendations about how the company moderates content around war. The company will have to provide additional information about why material is left up or taken down, and preserve anything that documents human rights abuses. Meta also has to share that documentation with authorities when appropriate. Alexa Koenig, the executive director of the Human Rights Centre, wrote a sharp analysis for Tech Policy Press explaining why this is actually a pretty big deal.
What I learned this week
The science about the relationship between social media and mental health for teens is still pretty complicated. A few weeks ago, Kaitlyn Tiffany at the Atlantic wrote a really in-depth feature, surveying the existing, and sometimes conflicting, research in the field. Teens are indeed experiencing a sharp increase in mental-health issues in the United States, and social media is often considered a contributing factor to the crisis.
The science, however, is not as clear or illuminating as we might hope, and just exactly how and when social media is damaging is not yet well established in the research. Tiffany writes that “a decade of work and hundreds of studies have produced a mixture of results, in part because they’ve used a mixture of methods and in part because they’re trying to get at something elusive and complicated.” Importantly, “social media’s effects seem to depend a lot on the person using it.”
The new text-to-speech model Bark was released with constraints on voice cloning and curated prompts to ensure user safety. However, researchers have since stripped out those constraints and made the unrestricted version available in an accessible Jupyter notebook. Now, using just 5-10 seconds of audio/text samples, it is possible to clone a whole voice.
What is Bark?
Suno’s groundbreaking Bark text-to-audio model is built on GPT-style transformer models and can produce natural-sounding speech in several languages, along with music, ambient noise, and basic sound effects. The model can also generate nonverbal expressions such as laughing, sighing, and crying.
Bark uses GPT-style models to create speech with minimal fine-tuning, producing voices with a wide range of expression and emotion that accurately reflect subtleties in tone, pitch, and rhythm. The result is so convincing it makes you question whether you’re listening to a real person. Bark generates impressively clear and accurate speech in several languages, including Mandarin, French, Italian, and Spanish.
How does it work?
Bark employs GPT-style models to produce audio from scratch, like Vall-E and other incredible work in the area. In contrast to Vall-E, Bark embeds the initial text prompt into high-level semantic tokens rather than phonemes. It can therefore generalize beyond speech to other sounds present in the training data, such as music lyrics or sound effects. A second model then converts the semantic tokens into audio codec tokens, from which the entire waveform is created.
Features
Bark has built-in support for several languages and can automatically detect the input language. While English presently has the highest quality, other languages will improve with scale. When presented with code-switched text, Bark will use the natural accent for each of the languages involved.
Bark is capable of producing any form of sound imaginable, including music. There is no fundamental distinction between speech and music in Bark’s mind. On occasion, though, Bark will instead create music based on words.
Bark can replicate every nuance of a human voice, including timbre, pitch, inflection, and prosody. The model also attempts to preserve environmental sounds, music, and other elements of the input audio. Thanks to Bark’s automatic language recognition, you can use a German history prompt with English content, for instance; the resulting audio will typically have a German accent.
Users can specify a certain character’s voice by providing prompts like NARRATOR, MAN, WOMAN, etc. These directions are not always followed, especially if a conflicting audio history prompt is also supplied.
Performance
CPU and GPU (PyTorch 2.0+, CUDA 11.7, and CUDA 12.0) implementations of Bark have been validated. Bark can produce near real-time audio on modern GPUs using PyTorch nightly. Bark requires running transformer models with over a hundred million parameters, so inference times may be 10-100 times slower on older GPUs, the default Colab, or a CPU.
Dhanshree Shenwai is a Computer Science Engineer and has a good experience in FinTech companies covering Financial, Cards & Payments and Banking domain with keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements in today’s evolving world making everyone’s life easy.
Tailoring prompts for ChatGPT means increasing the effectiveness of the chatbot’s responses. Here are the best tried and tested prompts to bookmark.
ChatGPT has taken the world by storm since its release, with millions of users flocking to utilise its services at an unprecedented rate.
However, while some users have found the artificial intelligence (AI) chatbot to be a useful tool, others have been less than impressed, citing issues and limitations with their interactions with it.
One key factor to consider is the way in which users communicate with it. Simple commands may not always suffice, with users needing to employ more nuanced prompts to achieve their desired outcomes.
To help users make the most of ChatGPT’s capabilities, experts on social media platforms such as Twitter have been sharing valuable insights and strategies for effective communication with the chatbot.
Why is getting prompts right so important?
ChatGPT has been facing criticism for its inability to perform specific tasks accurately and its tendencies to lie and hallucinate. However, the secret to mastering ChatGPT and getting desired outcomes is choosing the correct prompts for it.
By using specific prompts, users can navigate the chatbot more effectively and achieve more personalised responses, unlocking the full potential of ChatGPT.
The importance of tailoring perfect prompts is so great that companies are recruiting experts who can communicate with chatbots effectively, and a new job, AI prompt engineer, has emerged in the market with a salary range of up to $300 000 (€275 346).
Euronews Next has compiled a list of the five most useful prompts and put them to the test.
Prompt 1: Simplifying complex notions
Prompt: Hey ChatGPT. I want to learn about (insert specific topic). Explain (insert specific topic) in simple terms. Explain to me like I’m 11 years old.
ChatGPT explains blockchain to an 11-year-old (screenshot: ChatGPT)
ChatGPT’s ability to provide clarity, use simple language, and offer explanations is top tier. When asked to explain blockchain in a way an 11-year-old would understand, its simplification of complex notions helps users grasp topics outside their expertise, with no prior knowledge of technical terms required.
Prompt 2: Generate the perfect marketing plan
Prompt: I want you to act as an advertiser. You will create a campaign to promote a product or service of your choice. You will choose a target audience, develop key messages and slogans, select the media channels for promotion, and decide on any additional activities needed to reach your goals. My first suggestion request is, “I need help creating an advertising campaign for (insert description of service or product)”
Using ChatGPT for marketing campaigns (screenshot: ChatGPT)
ChatGPT was trained on a vast swathe of the internet. It has absorbed what people like, what appeals to them most, which advertisements work well for companies, and the marketing strategies used to build a successful brand in any domain.
With ChatGPT on hand, the time when the success of marketing strategies is left in doubt or is a question of mere luck appears to be coming to an end.
So much so that individuals are using ChatGPT to build whole companies from scratch. Perhaps the most interesting part of this development is that it is working: by following simple step-by-step guides from the chatbot, users have been able to launch businesses and generate profit.
Prompt 3: Take advantage of expert consulting
Prompt: I will provide you with an argument or opinion of mine. I want you to criticise it as if you were <person>
Person: (insert expert name)
Argument: (insert desired topic)
Using ChatGPT for an expert opinion in the style of Elon Musk (screenshot: ChatGPT)
No one is better at providing money-making advice than the richest man in the world. Thanks to the extensive online presence of successful people like billionaire Elon Musk, ChatGPT is able to easily mimic their thinking process and personify them to provide relevant and helpful advice to users.
Prompt 4: Job interview simulations
Prompt: Simulate a job interview for (insert specific role). Context: I am looking for this job and you are the interviewer. You will ask me appropriate questions as if we were in an interview. I will respond. Only ask the following question once I have responded.
Simulating a job interview using ChatGPT (screenshot: ChatGPT)
Provide the chatbot with enough context about the job you’re interviewing for and let it do its magic. This is a great way to practice your interview responses and get an overall idea of what questions you might get asked.
As you provide the chatbot with more and more information when responding, it will tailor its questions more effectively.
Prompt 5: Make ChatGPT write like you
Prompt: [Insert Text]
Write about (insert text topic) as the above author would write.
ChatGPT mimics a writing style based on a writing sample (screenshot: ChatGPT)
One of the many complaints people have about ChatGPT is its inability to provide content tailored to each user. Many bemoan the dullness of its responses and how, in some cases, it is easy to guess that an AI wrote the piece.
However, when using the correct prompt, ChatGPT is capable of mimicking one’s own writing style and providing personalised responses.
For businesses and their customers, the answers to most questions rely on data that is locked away in enterprise systems. Here’s how to deliver that data to GPT model prompts in real time.
By this point, just about everybody has had a go playing with ChatGPT, making it do all sorts of wonderful and strange things. But how do you go beyond just messing around and using it to build a real-world, production application? A big part of that is bringing together the general capabilities of ChatGPT with your unique data and needs.
What do I mean by that? Let me give you an example of a scenario every company is thinking about right now. Imagine you’re an airline, and you want to have an AI support agent help your customers if a human isn’t available.
Your customer might have a question about how much it costs to bring skis on the plane. Well, if that’s a general policy of the airline, that information is probably available on the internet, and ChatGPT might be able to answer it correctly.
But what about more personal questions, like
Is my flight delayed?
Can I upgrade to first class?
Am I still on the standby list for my flight tomorrow?
It depends! First of all, who are you? Where and when are you flying? What airline are you booked with?
ChatGPT can’t help here because it doesn’t know the answer to these questions. This isn’t something that can be “fixed” by more innovation at OpenAI. Your personal data is (thankfully) not available on the public internet, so even Bing’s implementation that connects ChatGPT with the open web wouldn’t work.
The fundamental obstacle is that the airline (you, in our scenario) must safely provide timely data from its internal data stores to ChatGPT. Surprisingly, how you do this doesn’t follow the standard playbook for machine learning infrastructure. Large language models have changed the relationship between data engineering and model creation. Let me explain with a quick diagram.
[Diagram: Confluent]
In traditional machine learning, most of the data engineering work happens at model creation time. You take a specific training data set and use feature engineering to get the model right. Once the training is complete, you have a one-off model that can do the task at hand, but nothing else. Most of the problem-specific smarts are baked in at training time. Since training is usually done in batch, the data flow is also batch and fed out of a data lake, data warehouse, or other batch-oriented system.
With large language models, the relationship is inverted. Here, the model is built by taking a huge general data set and letting deep learning algorithms do end-to-end learning once, producing a model that is broadly capable and reusable. This means that services like those provided by OpenAI and Google mostly provide functionality off reusable pre-trained models rather than requiring they be recreated for each problem. And it is why ChatGPT is helpful for so many things out of the box. In this paradigm, when you want to teach the model something specific, you do it at each prompt. That means that data engineering now has to happen at prompt time, so the data flow problem shifts from batch to real-time.
What is the right tool for the job here? Event streaming is arguably the best because its strength is circulating feeds of data around a company in real time.
In this post, I’ll show how streaming and ChatGPT work together. I’ll walk through how to build a real-time support agent, discuss the architecture that makes it work, and note a few pitfalls.
While there’s no shortage of in-depth discussion about how ChatGPT works, I’ll start by describing just enough of its internals to make sense of this post.
ChatGPT, or really GPT, the model, is basically a very large neural network trained on text from the internet. By training on an enormous corpus of data, GPT has been able to learn how to converse like a human and appear intelligent.
When you prompt ChatGPT, your text is broken down into a sequence of tokens as input into the neural network. One token at a time, it figures out what is the next logical thing it should output.
Human: Hello.
AI: How
AI: How can
AI: How can I
AI: How can I help
AI: How can I help you
AI: How can I help you today?
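That loop can be sketched in a few lines of code, with a hard-coded stand-in for the real neural network that reproduces the exchange above:

```python
def generate(next_token, max_tokens=20, stop="?"):
    """Greedy decoding loop: ask the model for one token at a time,
    echoing the growing reply, until a stop token appears."""
    tokens = []
    for _ in range(max_tokens):
        tok = next_token(tokens)
        tokens.append(tok)
        print("AI:", " ".join(tokens))  # the reply grows one token per step
        if tok.endswith(stop):
            break
    return " ".join(tokens)

# A toy stand-in for the network: it just replays a canned sequence.
CANNED = ["How", "can", "I", "help", "you", "today?"]
reply = generate(lambda tokens: CANNED[len(tokens)])
print(reply)  # -> How can I help you today?
```

The real model, of course, computes each next token from billions of learned weights rather than a canned list, but the outer loop works the same way.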
One of the most fascinating aspects of ChatGPT is that it can remember earlier parts of your conversation. For example, if you ask it “What is the capital of Italy?”, it correctly responds “Rome”. If you then ask “How long has it been the capital?”, it’s able to infer that “it” means Rome as the capital, and correctly responds with 1871. How is it able to do that?
ChatGPT has something called a context window, which is like a form of working memory. Each of OpenAI’s models has different window sizes, bounded by the sum of input and output tokens. When the number of tokens exceeds the window size, the oldest tokens get dropped off the back, and ChatGPT “forgets” about those things.
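A sketch of that trimming behaviour- real systems count subword tokens with a proper tokenizer; plain word count stands in for it here:

```python
def fit_window(messages, window, count=lambda m: len(m.split())):
    """Drop the oldest messages until the conversation fits the context
    window (token counting crudely approximated by word count)."""
    kept = list(messages)
    while kept and sum(count(m) for m in kept) > window:
        kept.pop(0)  # the oldest tokens fall off the back and are forgotten
    return kept

history = ["What is the capital of Italy?", "Rome.",
           "How long has it been the capital?", "Since 1871."]
print(fit_window(history, window=12))
# The opening question is dropped first; once it goes, "it" loses its referent.
```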
[Diagram: Confluent]
As we’ll see in a minute, context windows are the key to evolving ChatGPT’s capabilities.
Making GPT-4 understand your business
With that basic primer on how ChatGPT works, it’s easy to see why it can’t tell your customer if their flight was delayed or if they can upgrade to first class. It doesn’t know anything about that. What can we do?
The answer is to modify GPT and work with it directly, rather than go through ChatGPT’s higher-level interface. For the purposes of this blog post, I’ll target the GPT-4 model (and refer to it as GPT hereafter for concision).
There are generally two ways to modify how GPT behaves: fine-tuning and search. With fine-tuning, you retrain the base neural network on new data to adjust its weights. But this approach isn’t recommended by OpenAI and others because it’s hard to get the model to memorize data with the level of accuracy needed to serve an enterprise application. Not to mention that any data it’s fine-tuned on may immediately become out of date.
That leaves us with search. The basic idea is that just before you submit a prompt to GPT, you go elsewhere and look up relevant information and prepend it to the prompt. You instruct GPT to treat that information as fact, essentially supplying your own facts to the context window at runtime.
If you were to do it manually, your prompt would look something like this:
You are a friendly airline support agent. Use only the following facts to answer questions. If you don’t know the answer, you will say “Sorry, I don’t know. Let me contact a human to help.” and nothing else.
The customer talking to you is named Michael.
Michael has booked flight 105.
Michael is flying economy class for flight 105.
Flight 105 is scheduled for June 2nd.
Flight 105 flies from Seattle to Austin.
Michael has booked flight 210.
Michael is flying economy class for flight 210.
Flight 210 is scheduled for June 10th.
Flight 210 flies from Austin to Seattle.
Flight 105 has 2 first class seats left.
Flight 210 has 0 first class seats left.
A customer may upgrade from economy class to first class if there is at least 1 first class seat left on the flight and the customer is not already first class on that flight.
If the customer asks to upgrade to first class, then you will confirm which flight.
When you are ready to begin, say “How can I help you today?”
Compared to fine-tuning, the search approach is a lot easier to understand, less error-prone, and more suitable for situations that require factual answers. And while it might look like a hack, this is exactly the approach being taken by some of the best-known AI products like GitHub Copilot.
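Mechanically, the search approach is just string assembly: look the facts up, then paste them in front of your instructions. A minimal sketch, where the template mirrors the manual example above and `build_prompt` is a hypothetical helper:

```python
TEMPLATE = (
    "You are a friendly airline support agent. "
    "Use only the following facts to answer questions. "
    "If you don't know the answer, you will say "
    '"Sorry, I don\'t know. Let me contact a human to help." '
    "and nothing else.\n\n"
    "{facts}\n\n"
    'When you are ready to begin, say "How can I help you today?"'
)

def build_prompt(facts):
    # One fact per line; GPT treats these as its ground truth.
    return TEMPLATE.format(facts="\n".join(facts))

prompt = build_prompt([
    "The customer talking to you is named Michael.",
    "Michael has booked flight 105.",
    "Flight 105 has 2 first class seats left.",
])
```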
So, how exactly do you build all this?
Constructing a customer 360
Let’s zoom out for a minute and set GPT aside. Before we can make a support agent, we have to tackle one key challenge—we need to collect all of the information that could be relevant to each customer.
Going back to the example of whether a customer can upgrade to first class, remember that the answer depends on a lot of different factors for the particular flight. To have enough context to answer it, you need to consolidate the data for:
Customer identity
Upcoming booked flights for the customer
Seat layout of the plane assigned to the flight
Current capacity for the flight
Rewards points for free upgrades
For most companies, this data is spread across a bunch of different systems like databases, data warehouses, SaaS applications, queues, and file systems. Much of it is not built to be queried interactively at low latency, and none of it is arranged to be easily consolidated. Communication between these systems is point-to-point, making it incredibly difficult to get a unified view of the data.
Event streaming is a good solution to bring all of these systems together. By tapping into feeds of information as each of them changes, you can construct a unified view of each customer that’s easy to query with low latency.
Confluent’s connectors make it easy to read from these isolated systems. Turn on a source connector for each, and changes will flow in real time to Confluent.
Because these streams usually contain somewhat raw information, you’ll probably want to process that data into a more refined view. Stream processing is how you transform, filter, and aggregate individual streams into a view more suitable for different access patterns. You’ll ultimately want to sink that view into a relational database, key/value store, or document store.
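As a toy illustration of that refinement step, here is how individual change events might fold into one record per customer. In practice this would run in a stream processor such as ksqlDB or Kafka Streams; the event shapes below are invented for illustration:

```python
from collections import defaultdict

def build_customer_360(events):
    # Fold a stream of raw change events into a per-customer view.
    view = defaultdict(lambda: {"name": None, "bookings": []})
    for event in events:
        record = view[event["customer_id"]]
        if event["type"] == "profile_updated":
            record["name"] = event["name"]
        elif event["type"] == "flight_booked":
            record["bookings"].append(event["flight"])
    return dict(view)

events = [
    {"type": "profile_updated", "customer_id": "c1", "name": "Michael"},
    {"type": "flight_booked", "customer_id": "c1", "flight": 105},
    {"type": "flight_booked", "customer_id": "c1", "flight": 210},
]
view = build_customer_360(events)
```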
Connecting the customer data to GPT
With the customer 360 data turned into a unified view, the next step is to programmatically connect that information with each prompt. At this point, the architecture looks like this:
And the support agent interaction looks like this:
AI: How can I help you today?
Human: Hi. I’d like to upgrade my seat to first class. Can you do that?
AI: Absolutely, I’d be happy to assist with that. However, could you please specify which flight you’d like to upgrade to first class? Is it Flight 105 or Flight 210?
Human: 105 please
AI: Great news, Michael. We have 2 first class seats left on Flight 105. I can upgrade your economy class seat to first class for this flight. Shall I proceed with this change for you?
Human: Yep.
AI: Excellent, Michael. I’ve successfully upgraded your seat to first class on Flight 105. You will now enjoy all the perks and amenities of our first class service. Let me know if there’s anything else I can assist you with today. Safe travels!
Human: Can you upgrade my 210 seat to first class too?
AI: I’m sorry, Michael, but currently there are no first class seats left on Flight 210. Therefore, I’m unable to upgrade your seat for this flight. If there’s anything else you need help with, please let me know.
So when the application receives a chat message, all it needs to do is look up the customer ID who sent it, query the customer 360 view, and supply a block of useful information at the start of every prompt—similar to what I showed in the manual pre-prompt.
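Sketched in code, that per-message flow might look like the following (the record shape is a hypothetical example of what a customer 360 row might contain, and `facts_for_customer` is an invented helper):

```python
def facts_for_customer(record):
    # Flatten a customer 360 record into fact lines for the prompt.
    name = record["name"]
    facts = [f"The customer talking to you is named {name}."]
    for flight in record["bookings"]:
        facts.append(f"{name} has booked flight {flight}.")
    return facts

record = {"name": "Michael", "bookings": [105, 210]}
for line in facts_for_customer(record):
    print(line)
```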
Connecting your knowledge base to GPT
This technique works great for questions about an individual customer, but what if you wanted the support agent to be broadly knowledgeable about your business? For example, if a customer asked, “Can I bring a lap infant with me?”, that isn’t something that can be answered through customer 360 data. Each airline has general requirements that you’d want to tell the customer about, like needing to bring the child’s birth certificate.
Information like that usually lives across many web pages, internal knowledge base articles, and support tickets. In theory, you could retrieve all of that information and prepend it to each prompt as I described above, but that is a wasteful approach. In addition to taking up a lot of the context window, you’d be sending a lot of tokens back and forth that are mostly not needed, racking up a bigger usage bill.
How do you overcome that problem? The answer is through embeddings. When you ask GPT a question, you need to figure out what information is related to it so you can supply it along with the original prompt. Embeddings are a way to map things into a “concept space” as vectors of numbers. You can then use fast operations to determine the relatedness of any two concepts.
OK, but where do those vectors of numbers come from? They’re derived from feeding the data through the neural network and grabbing the values of neurons in the hidden layers. This works because the neural network is already trained to recognize similarity.
To calculate the embeddings, you use OpenAI’s embedding API. You submit a piece of text, and the embedding comes back as a vector of numbers.
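Relatedness between two embeddings is typically measured with cosine similarity. The three-dimensional vectors below are toy stand-ins (real embeddings have hundreds or thousands of dimensions), but the math is identical:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical
    # direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors: "lap infant" should land near "infant policy"
# in concept space, and far from "baggage fees".
infant_policy = [0.9, 0.1, 0.0]
lap_infant = [0.8, 0.2, 0.1]
baggage_fees = [0.0, 0.1, 0.9]

assert cosine_similarity(lap_infant, infant_policy) > cosine_similarity(lap_infant, baggage_fees)
```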
Since we’re going to use embeddings for all of our policy information, we’re going to have a lot of them. Where should they go? The answer is in a vector database. A vector database specializes in organizing and storing this kind of data. Pinecone, Weaviate, Milvus, and Chroma are popular choices, and more are popping up all the time.
As a quick aside, you might be wondering why you shouldn’t exclusively use a vector database. Wouldn’t it be simpler to also put your customer 360 data there, too? The problem is that queries against a vector database retrieve data based on the distance between embeddings, which is not the easiest thing to debug and tune. In other words, when a customer starts a chat with the support agent, you absolutely want the agent to know the set of flights the customer has booked. You don’t want to leave that up to chance. So in this case it’s better to just query your customer 360 view by customer ID and put the retrieved data at the start of the prompt.
With your policies in a vector database, harvesting the right information becomes a lot simpler. Before you send a prompt off to GPT, you make an embedding out of the prompt itself. You then take that embedding and query your vector database for related information. The result from that query becomes the set of facts that you prepend to your prompt, which helps keep the context window small since it only uses relevant information.
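A vector database does this at scale with approximate nearest-neighbor indexes, but the core query is simple enough to sketch in memory (the store contents and toy two-dimensional embeddings are invented for illustration):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, store, k=2):
    # store: (text, embedding) pairs, a stand-in for a vector database.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("Lap infants require the child's birth certificate.", [0.9, 0.1]),
    ("Checked bags over 50 lbs incur an extra fee.", [0.1, 0.9]),
]
query_embedding = [0.8, 0.2]  # pretend embedding of "Can I bring a lap infant?"
facts = top_k(query_embedding, store, k=1)
```

The returned `facts` are what you prepend to the prompt, so only relevant policy text spends context-window space.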
That, at a very high level, is how you connect your policy data to GPT. But I skipped over a lot of important details to make this work. Time to fill those in.
Syncing your knowledge base to the vector database
The next step is to get your policy information into the vector database. The biggest decision to make here is how you’ll chunk the data.
Chunking refers to the amount of data that you put together in one embedding. If the chunk size is too large or too small, it’ll be harder for the database to query for related information. To give you an idea of how this works in other domains, you might choose to chunk a Wikipedia article by section, or perhaps by paragraph.
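A minimal paragraph-level chunker might look like this (`max_chars` is an illustrative budget; real systems often measure chunk size in tokens instead):

```python
def chunk_by_paragraph(text, max_chars=500):
    # Greedily merge consecutive paragraphs until a chunk hits the budget.
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then gets its own embedding and its own entry in the vector database.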
Now, if your policies change slowly or never change, you can scrape all of your policy documents and batch upload them to the vector database, but a better strategy would be to use stream processing. Here again, you can set up connectors to your file systems so that when any file is added or changed, that information can be made rapidly available to the support agent.
If you use stream processing, sink connectors help your data make the final jump, moving your embeddings into the vector database.
Tying it all together
We’re now ready to bring all of this together into a working example. Here’s what the architecture looks like:
This architecture is hugely powerful because GPT will always have your latest information each time you prompt it. If your flight gets delayed or your terminal changes, GPT will know about it during your chat session. This is completely distinct from current approaches, where the chat session would need to be reloaded, or where you’d wait hours (or days) for new data to arrive.
And there’s more. A GPT-enabled agent doesn’t have to stop at being a passive Q/A bot. It can take real action on your behalf. This is again something that ChatGPT, even with OpenAI’s plugins, can’t do out of the box because it can’t reason about the aftereffects of calling your internal APIs. Event streams work well here because they can propagate the chain of traceable events back to you. As an example, you can imagine combining command/response event pairs with chain-of-thought prompting to approach agent behavior that feels more autonomous.
The ChatGPT Retrieval Plugin
For the sake of giving a clear explanation about how all of this works, I described a few things a bit manually and omitted the topic of ChatGPT plugins. Let’s talk about that now.
Plugins are a way to extend ChatGPT and make it do things it can’t do out of the box. New plugins are being added all the time, but one in particular is important to us: the ChatGPT Retrieval Plugin. The ChatGPT Retrieval Plugin acts as a sort of proxy layer between ChatGPT and the vector database, providing the glue that allows the two to talk to each other.
In my example, I illustrated how you’d receive a prompt, make an embedding, search the vector database, send it to GPT, and so on. Instead of doing that by hand, the ChatGPT Retrieval Plugin makes the right API calls back and forth on your behalf. This would allow you to use ChatGPT directly, rather than going underneath to OpenAI’s APIs, if that makes sense for your use case.
Keep in mind that plugins don’t yet work with the OpenAI APIs. They only work in ChatGPT. However, there is some work going on in the LangChain framework to sidestep that.
If you take this approach, one key change to the architecture above is that instead of connecting Apache Kafka directly to the vector database, you’d want to forward all of your customer 360 data to the Retrieval plugin instead—probably using the HTTP sink connector.
Whether you connect these systems manually or use the plugin, the mechanics remain the same. Again, you can choose whichever method works best for your use case.
Capturing conversation and fine-tuning
There’s one last step to tidy up this example. As the support agent is running, we want to know what exactly it’s doing. What’s a good way to do that?
The prompts and responses are good candidates to be captured as event streams. If there’s any feedback (imagine an optional thumbs up/down to each response), we can capture that too. By again using stream processing, we can keep track of how helpful the agent is from moment to moment. We can feed that knowledge back into the application so that it can dynamically adjust how it constructs its prompt. Think of it as a bit like working with runtime feature flags.
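One toy way to keep that rolling helpfulness signal, assuming a thumbs-up/down vote per response (the class and its window size are invented for illustration):

```python
from collections import deque

class HelpfulnessTracker:
    """Rolling fraction of thumbs-up votes over the last `window` responses."""

    def __init__(self, window=100):
        self.votes = deque(maxlen=window)  # old votes fall off automatically

    def record(self, thumbs_up):
        self.votes.append(1 if thumbs_up else 0)

    def score(self):
        return sum(self.votes) / len(self.votes) if self.votes else None

tracker = HelpfulnessTracker(window=3)
for vote in [True, True, False, False]:
    tracker.record(vote)
# Only the last 3 votes count toward the score.
```

An application could consume this score from a stream and, say, switch to a more detailed prompt template when it dips, much like flipping a feature flag at runtime.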
Capturing this kind of observability data unlocks one more opportunity. Earlier I mentioned that there are two ways to modify how GPT behaves: search and fine-tuning. Until now, the approach I’ve described has centered on search, adding information to the start of each prompt. But there are reasons you might want to fine-tune, and now is a good time to look at them.
When you add information to the start of a prompt, you eat up space in the context window, eroding GPT’s ability to remember things you told it in the past. And with more information in each prompt, you pay more for tokens to communicate with the OpenAI APIs. The incentive is to send the least amount of tokens possible in each prompt.
Fine-tuning is a way of side-stepping those issues. When you fine-tune a machine learning model, you make small adjustments to its neural network weights so that it will get better at a particular task. It’s more complicated to fine-tune a model, but it allows you to supply vastly more information to the model once, rather than paying the cost every time a prompt is run.
Whether you can do this or not depends on what model you’re using. This post is centered around the GPT-4 model, which is closed and does not yet permit fine-tuning. But if you’re using an open-source model, you have no such restrictions, and this technique might make sense.
So in our example, imagine for a moment that we’re using a model capable of being fine-tuned. It would make sense to do further stream processing and join the prompt, response, and feedback streams, creating a stream of instances where the agent was being helpful. We could feed all of those examples back into the model for fine-tuning as human-reinforced feedback. (ChatGPT was partly constructed using exactly this technique.)
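A toy version of that join, keyed on a shared interaction id (the field names are invented for illustration; a stream processor would do this continuously rather than in batch):

```python
def helpful_examples(prompts, responses, feedback):
    # Join the three streams on id and keep only interactions a human
    # marked as helpful; the result is ready to use as fine-tuning data.
    responses_by_id = {r["id"]: r["text"] for r in responses}
    helpful_ids = {f["id"] for f in feedback if f["thumbs_up"]}
    return [
        {"prompt": p["text"], "completion": responses_by_id[p["id"]]}
        for p in prompts
        if p["id"] in helpful_ids and p["id"] in responses_by_id
    ]

examples = helpful_examples(
    prompts=[{"id": 1, "text": "Can I upgrade to first class?"},
             {"id": 2, "text": "Is my flight delayed?"}],
    responses=[{"id": 1, "text": "Yes, flight 105 has seats left."},
               {"id": 2, "text": "Sorry, I don't know."}],
    feedback=[{"id": 1, "thumbs_up": True},
              {"id": 2, "thumbs_up": False}],
)
```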
Keep in mind that any information that needs to be real-time still needs to be supplied through the prompt. Remember, fine-tuning only happens once offline. So it’s a technique that should be used in conjunction with prompt augmentation, rather than something you’d use exclusively.
Known limitations
As exciting as this is, I want to call out two limitations in the approach outlined in this article.
First, this architecture predominantly relies on the context window being large enough to service each prompt. The supported size of context windows is expanding fast, but in the short term, this is a real limiter.
The second is that prompt injection attacks are proving challenging to defend against. People are constantly finding new ways to get GPT to ignore its previous instructions, and sometimes act in a malicious way. Implementing controls against injection will be even more important if agents are empowered to update existing business data as I described above.
In fact, we’re already starting to see the practical choices people are making to work around these problems.
Next steps
What I’ve outlined is the basic framework for how streaming and GPT can work together for any company. And while the focus of this post was on using streaming to gather and connect your data, I expect that streaming will often show up elsewhere in these architectures.
I’m excited to watch this area continue to evolve. There’s clearly a lot of work to do, but I expect both streaming and large language models to mutually advance one another’s maturity.
By Michael Drogalis
Michael Drogalis is a principal technologist on the TSG team at Confluent, where he helps make Confluent’s developer experience great. Before joining Confluent, Michael served as the CEO of Distributed Masonry, a software startup that built a streaming-native data warehouse. He is also the author of several popular open source projects, most notably the Onyx Platform.