Get ready for upload

Alexandra Samuel
6 min read · Apr 10, 2024
A snapshot of ROBO+A.I.Lex, a generative AI based on our online history.

Hybrid work warmed us up for the world of generative AI in many ways: When you’re already used to interacting with your colleagues via chat rather than in person, it’s easier to get used to riffing with a chat-based AI. When you’re constantly overloaded by email and messages, it’s much more appealing to adopt AI tools that help you triage incoming messages and quickly formulate your replies.

But there’s another, more insidious way hybrid has set the stage for AI: Our shift to hybrid work has accidentally made it easier and more tempting to invade other people’s privacy and appropriate their words, work and ideas. That risk lies not only in how employers or big companies can rip us off, but how we can rip off one another.

Improbably enough, I recognized this danger through the process of creating our annual family Valentine. Every February 14, we send about 450 people an email newsletter about the past year of our family life. (If you want to see next year’s newsletter, you can sign up here.) Most years we do some kind of creative tech project as part of this newsletter; this year, my husband and I decided to reincarnate as an AI named “ROBO+A.I.Lex” and write our friends as our new, cloud-based selves.

How we built our own brains

The job of creating an AI persona turned out to be pretty straightforward. I got a decent version working as a custom GPT, but that version is only accessible to people who pay for a ChatGPT subscription. So I created a much, much better version of ROBO+A.I.Lex with a service called CustomGPT, which doesn't require users to have a login or account and let me upload a much larger knowledge base, so it feels a lot more like us.

The amazing and scary thing is that it took absolutely no technical skill or content-ownership validation for me to build a CustomGPT based on the files I had kicking around my hard drive, like my exported Facebook posts and a backup of my blog. Anyone could take any files they have on hand, and build a GPT based on that data.
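To give a sense of how low the bar is, here's a rough sketch of that "gather whatever is on my hard drive" step in Python. The function name, file types, and directory layout are my own illustration, not CustomGPT's actual interface; real services just give you an upload button that does something like this behind the scenes:

```python
from pathlib import Path

def collect_knowledge_base(root, extensions=(".txt", ".md", ".json", ".html")):
    """Gather every matching file under `root` into a list of
    (relative path, text) pairs, ready to upload as an AI knowledge base."""
    docs = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix.lower() in extensions:
            # errors="ignore" skips undecodable bytes in messy exports
            docs.append((str(path.relative_to(root)),
                         path.read_text(errors="ignore")))
    return docs
```

Point it at a folder of exported Facebook posts or a blog backup and you have, in a few lines, the raw material for a personal AI.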

What AI knows about your friends

So far I’ve told you how easy it was to model our own brains. But what about modeling other people’s?

Well, that wasn’t quite as easy. We wanted to customize each outgoing Valentine based on our relationship with the recipient, and I’m pleased to tell you that Facebook and LinkedIn have learned from the Cambridge Analytica scandal, and make it damn hard to extract information on other people (yay!), unless you want to spend a lot of time copying and pasting. So we largely relied on our own email histories, which we synced to Coda, and then analyzed using Coda’s GPT plugins. (Coda’s AI documentation specifically notes that customer data is never added to any AI training models, so we were able to do this without exposing anyone’s data.)

At the end of the day, all of this took way more time than any sane human would spend on customizing a Valentine newsletter. I could easily have handwritten a Valentine to every single person on this list in the time I spent figuring out how to AI-customize each email.

The only limitation is human decency

But the process of figuring all this out gave me a whole new perspective on what AI makes possible — and what that means for our relationships with employers, and with one another.

Let me be super clear: The AI model I built from my own past online history is something I could do almost as well for many, many other people — no permission required. It might be a small hassle, it would probably require violating some terms of service, and it might break privacy laws in some jurisdictions, but it’s really easy, and doesn’t necessarily require any technical skills beyond copying and pasting.

We set limits on how and what we’d use to build our Valentine, but those limits came from human decency and an understanding of privacy risks, rather than from any technological obstacle. The kind of creepy, privacy-invading data profiling that we’ve come to expect from giant tech platforms? That creepiness is now an option for just about anyone.

Anyone who has access to your past social media posts, emails or work product has the ingredients to build a personal AI based on you. It’s not even hard to do! And it’s easy to imagine many scenarios where someone might be tempted to put this power to work, from the well-meaning (“if I build a model based on all of Jin’s past emails, I won’t need to pester her so often”) to the manipulative (“if I build a model of this executive based on all her past online posts, I can practice until I have the perfect sales pitch”).

What may limit all this are little things like privacy and intellectual property laws. But those are only an obstacle in a context where your data belongs to you.

How hybrid work accelerates the brain-building process

And of course, many of us spend a lot of our days generating words and ideas that do not belong to us. I’m talking about your working life, where things like the emails you send or the documents you write belong to your employer, not you.

That might have sounded scary five years ago, but it’s much scarier today. After all, the work world of 2019 consisted of a whole lot of interactions that would be impossible to digitize and mine: Unless you’re recording and transcribing everything that happens in your company’s meeting rooms, most in-person conversations are useless when it comes to building an AI.

But now, thanks to hybrid work, much of our working lives are captured and digitized. I’m talking about all the meeting whiteboards that have been replaced by Google Docs and Miro boards. I’m talking about the hallway conversations that are now emails and Slack messages. I’m talking about every single recorded video meeting — because even if it hasn’t been transcribed yet, it could be transcribed, and those transcriptions are excellent fodder for an AI.

Hybrid and remote work mean that more and more of our work product exists in a form that can be uploaded to AIs. It’s already trivially easy to upload data and turn it into an AI knowledge base, and it’s only going to get easier from here.

Those authors and publications suing over how their words got absorbed into ChatGPT’s training data? They’re speaking for all of us — because any of us could have our work used by our employers as the basis for training AIs to take on portions of our work.

How to prevent your own upload

There’s not a lot any of us can do in our individual working lives to avoid this outcome. It’s not practical to refuse to use email or Slack or Zoom because you’re worried about your ideas getting digitized, and once you’ve generated that digital trail, you may not have a lot of say in how it gets used.

But we can encourage policymakers and employers to set a standard for how the work of individual employees and contributors gets incorporated into employer AIs. That might look like setting minimums and maximums for how employee data gets folded into an organization’s collective knowledge base: For example, we might require that any one AI encompass the work of at least 20 employees, so that no one person can be effectively duplicated by an AI. (Think of this as comparable to how Facebook set a minimum audience size for ad targeting after researchers found that it was possible to target a single Facebook user via ad settings.) And we might set a maximum on how many messages or documents can be assimilated from any one employee, so that no more than ten or twenty percent of your digital footprint can be permanently absorbed into the company’s brain.
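Both of those rules would be mechanically checkable. Here's a hedged Python sketch of what such a policy check could look like; the thresholds, function name, and data shapes are my own illustration of the idea, not anyone's actual implementation:

```python
MIN_CONTRIBUTORS = 20       # assumed floor, echoing the ad-targeting analogy
MAX_FOOTPRINT_SHARE = 0.20  # assumed cap: at most 20% of any one person's trail

def check_training_set(included, footprint):
    """Return a list of policy violations for a proposed AI knowledge base.

    `included[name]`  = number of that employee's documents to be ingested
    `footprint[name]` = that employee's total digital footprint (doc count)
    """
    violations = []
    if len(included) < MIN_CONTRIBUTORS:
        violations.append(
            f"only {len(included)} contributors (minimum {MIN_CONTRIBUTORS})")
    for name, count in included.items():
        share = count / footprint[name]
        if share > MAX_FOOTPRINT_SHARE:
            violations.append(
                f"{name}: {share:.0%} of footprint exceeds "
                f"{MAX_FOOTPRINT_SHARE:.0%} cap")
    return violations
```

A knowledge base that draws lightly on many people passes; one that deeply mines a single person's output gets flagged.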

Perhaps most important, we can start to rethink the nature of intellectual property itself — and to get serious about the idea of I.P. rights for employees as well as corporations. If we’re going to build a world in which our knowledge and experience is assimilated into AI profiles, then we should get an ownership stake in the AIs that are built in our collective image.

Because a world where it’s so easy to scrape the content of someone else’s life and ideas is a world where AIs may indeed take over a great many of our jobs. There’s no reason they should take our incomes, too.

Written by Alexandra Samuel

Speaker on hybrid & remote work. Author, Remote Inc. Contributor to Wall Street Journal & Harvard Business Review. https://AlexandraSamuel.com/newsletter
