When data influences people at the corporate level, trouble is sure to follow
A personal opinion piece by Douglas James.
I’ve written for Bytagig for nearly two years now, and it’s been an engaging experience diving into the world of IT and cybersecurity. The nature of what I discuss keeps me attuned to how technological changes shape the cyber domain. Mostly, I pay attention to the security layers, and for good reason. The world is, frankly, ill-prepared for a lot of major cybersecurity threats, as demonstrated by the COVID-19 pandemic and the ongoing success of ransomware.
In other words, we’re more reliant on tech and data than ever before, yet we lack a basic, critical understanding of how to avoid threats. Phishing, for example, is still prominent and successful, demonstrating the power of social engineering schemes.
But beyond this, I keep running into concepts that I find, frankly, alarming in the long term.
When data becomes invasive
Let me cut through the fat and drop the “business pretense,” because I think a frank and honest discussion about our data, our personal information, is important. Hundreds of data firms and companies carve up PII like a Christmas goose: info collected from our web browsing habits via tracking tools.
And while I do think it’s understandable to collect information to better understand your customer base, I don’t like the lack of consent. No, I don’t mean cookies. I mean how the global net sends our info willy-nilly to whoever wants it (though mainly for commercial and advertising reasons). I’m okay with a private website enterprise “checking out” a visitor, but if someone asked me directly, no, I wouldn’t give them the thumbs up to sell my habits to a faceless void of reports and people I’ll never see.
I don’t like it, because I’m not foolish enough to believe digital monopolies are an innocent gang of aw-shucks friends who just want the best for me. They’re boardroom meetings and suits, working together towards the best way to inject an ad into your brain so it has a chance to fight through the electronic muck we routinely drown in.
It may sound cynical, but please. Let’s not pretend tech giants are our benefactors. It’s critical we make that distinction to protect ourselves, because personal cyber defense often depends on it. If you don’t believe me, read a typical EULA for a digital product (games media is a prime example). Heck, certain apps and software you install will dive into your personal life and flip through it like it belongs to them. Imagine a stranger in a suit rustling through your personal pictures and info, writing down what they like, and leaving. In your house.
And on to the IoB
Now, hang on, I’m going somewhere with this. You get the idea: data and personal info are bought and sold in a deregulated dark market for curious business reasons, among other things. What does that have to do with my main issue, the “Internet of Behaviors”?
Is that like the IoT? A bit. Replace things with people and you’ve got the idea. Essentially, the IoB seeks to influence and modify an individual’s (or group’s) behavior by collecting data on them. How said data is collected depends on the tools. A smart tracker in the bathroom to see how often hands are washed. Cameras observing people to check where they go and what they do at certain times. Got a company car? Then you already have an example: trackers that monitor how employees perform on the road.
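To make that concrete: strip away the hardware and an IoB system is just a pipeline of behavioral events. Here is a minimal sketch in Python (every name and field below is hypothetical, since real IoB systems are proprietary) of what those tracker readings might look like once they land in someone’s database. Who, what, where, and when. That’s really all it is.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical shape of one behavioral-telemetry event. Real IoB
# systems are proprietary, but they all reduce to records like this.
@dataclass
class BehaviorEvent:
    subject_id: str   # pseudonymous ID, easily tied back to a real person
    sensor: str       # e.g. "sink_tracker", "vehicle_gps", "badge_reader"
    action: str       # the observed behavior
    timestamp: str    # UTC, ISO 8601


def handwash_counts(events):
    """Toy aggregate: count 'hand_wash' events per subject."""
    counts = {}
    for e in events:
        if e.action == "hand_wash":
            counts[e.subject_id] = counts.get(e.subject_id, 0) + 1
    return counts


events = [
    BehaviorEvent("emp-042", "sink_tracker", "hand_wash",
                  datetime(2021, 3, 1, 9, 0, tzinfo=timezone.utc).isoformat()),
    BehaviorEvent("emp-042", "vehicle_gps", "hard_brake",
                  datetime(2021, 3, 1, 17, 5, tzinfo=timezone.utc).isoformat()),
    BehaviorEvent("emp-117", "sink_tracker", "hand_wash",
                  datetime(2021, 3, 1, 9, 2, tzinfo=timezone.utc).isoformat()),
]

print(handwash_counts(events))  # {'emp-042': 1, 'emp-117': 1}
```

Notice how little structure it takes before individual employees become rows in a report, which is exactly the point I’m getting at.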
Now, I hear you: these are tools for enforcing work protocol, right? It sounds reasonable, doesn’t it? Well, everything sounds reasonable on paper.
How it impacts privacy laws is one thing. How it’s used and implemented is another. It’s why I brought up the nature of personal information, something which readily demonstrates how the modern world isn’t all that concerned with our privacy. Oh sure, ignorance is bliss in the grand scheme of things. But if somebody showed me my own digital footprint and exactly who had my data (and what data), I’d probably have a panic attack.
The ethical implications of the IoB
Ethics is, ultimately, the study of morality, and given how often we treat morality as “subjective,” relying on the good graces of digital tech giants to use their tools and powers responsibly is . . . optimistic.
The question, firstly, is a personal one. How much of a right do you think a business or corporation has to your personal info? And I don’t mean walking into a store while being recorded on a security camera. I mean right down to your habits, the things you do, unconsciously or otherwise. A lot of people in the US, for instance, don’t like the idea of heavy government and/or political influence in their daily lives. Why then would (or should) a business get a free pass?
Now, I realize that’s a loaded question. But it frames the issue plainly: if you don’t like federal powers getting into your personal life, tech giants shouldn’t get the green flag either.
As for collecting information, other ethical implications get thrown into the mix, namely modifying behavior. How, and to what extent, remains at the mercy of a business and its practices. But I don’t know about you; I’m not comfortable with the concept of coordinated campaigns influencing me on a subtle level, hoping to guide me to a decision that, ultimately, was never quite mine.
There’s also the massive issue of security. Cyber breaches and security threats are common and routinely successful, due both to remote working and to sophisticated threat actors. What the IoB does, then, is create an entirely new subset of extremely valuable data for hackers to have a go at. In a threat scenario, said data could be used to launch powerful phishing campaigns aimed at individuals, SMBs, and larger businesses.
Ultimately, the IoB isn’t a tech philosophy that will roll out in a week, leaving us suddenly at the mercy of tech companies. It will, hopefully, receive a lot of scrutiny and oversight in the coming years, and it will certainly see pushback from privacy laws.
The best thing now, though, is to remain aware of it as a steadily growing concept.