Driving back from work one night last summer, I had a very strange technological experience.

I’d been working on assignment near Boston several days in a row, finishing quite late and returning each night to the hotel I’d been put up in near my employer’s office, about 12 miles away. I was not familiar with the area, so I had been in the habit of using Google Maps’ GPS on my phone each time I commuted back and forth. Uninterested in the empty exurbs I was driving through well after midnight following another long day of work, I turned on a podcast and dutifully followed the GPS line on the screen as I had on the nights before. It led me along a now-familiar path out of the neighborhood and onto the highway.

Then, at one point along the empty eight-lane road, the GPS indicated my exit was approaching. I got in the right lane and slowed down. But something seemed unfamiliar. I followed the line on my phone’s screen, yet there was a dissonance in what I was seeing – was this the right exit? I didn’t know. Not recognizing the signs, and unsure what the right road should be, I had no choice but to go where I was told.

Coming off the ramp with no one around, I turned right, and after a short distance I stopped. The road before me ended in a cul-de-sac. Confused, I picked up the phone and zoomed out – where was it trying to get me to go? The answer: the line I was following had wanted me to take the off-ramp, turn right to a dead end, turn around, and get right back on the highway for another few miles. Exactly what I had done.

I suddenly realized that I had been tricked. Sure, perhaps it was a glitch in the system, a random bug that, after the app had correctly directed me along the exact same route four nights in a row, this time fed me erroneous information. But instinct told me this was not random.

In mid-2014, Facebook received a week of bad PR after it was discovered that the company had partnered in a behavioral study that tested its users’ responses without their knowledge. I remember hearing an interview about this that went on to explain how common such tests are: when a tech company has millions of users on its network at any given moment, it’s incredibly easy to grab samples of a few hundred thousand at a time and test how they respond to a subtle change. It happens every day, and users never know they are being tested; in effect, they are lab rats helping companies collect data all the time.
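To make the scale concrete, here is a minimal sketch of how such silent sampling is often implemented: hash each user ID into a stable bucket and quietly show the variant to whoever falls below a cutoff. The function name, the experiment label, and the 2% rate are my own illustrative assumptions, not anything Facebook or Google has documented.

```python
import hashlib

def in_test_group(user_id: str, experiment: str, sample_rate: float = 0.02) -> bool:
    """Return True if this user falls into the experiment's silent sample.

    Hashing (experiment + user_id) gives each experiment its own
    independent, repeatable slice of the user base -- no opt-in,
    no notification, and the same user always gets the same answer.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1].
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < sample_rate

# With millions of active users, even a 2% slice yields a test
# population in the hundreds of thousands.
print(in_test_group("user-8675309", "reroute-familiar-drivers"))
```

Because the assignment is a deterministic hash rather than a coin flip, the same person lands in the same group every time, which is exactly what would let an experiment quietly follow one driver for days.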

If someone had just run a test on me, I had failed. I had driven the same route half a dozen times or more, using the GPS each time, and apparently had put so little effort into actually learning it that I could be misdirected with ease. I was the cartoon Road Runner, tricked by a repainted centerline pointing in a new direction. I imagined an algorithm designed to watch users’ habits, identify me as someone who should know the route by now without a GPS, and test my dependence on it. If that were the case, I had just failed, and it felt deeply upsetting.

As I got back on the road, indignant at Google, turning off the device in paranoia and swearing to myself I would rely on it less in the future, a different thought occurred to me. Yes, I was upset at the prospect of having been tricked, but the actual effect could end up being positive. I had been punished, in a very minor way, for my technological dependence. It could even be said I’d been taught a lesson. Might that actually be desirable?

If that were true, Google could be seen as actually having helped me. Detecting that I was using my GPS more often than necessary, the algorithm introduced a glitch to prod me, to bring my dependence to my attention. (After all, technology is only invisible to us until it breaks.) Perhaps the real test of the experiment was whether I would go on to use my GPS less after the event.

Although my first reaction was, “Is Google testing the limits of how blindly I will follow its instructions?” in the end, I was left with a different question: “Could technology be programmed to help us rely on it less?”

Given the technological tools available, it seems possible in some circumstances. But of course, helping people rely on technology less runs counter to the goals of most technology companies, which work hard to keep us connected to and dependent upon their systems. Thinking about this leads straight into bigger questions about what types of behavior should be encouraged or discouraged, whose interests that might serve, and who should be allowed to make such decisions and enact them.

Presently, Google will do what is in Google’s own interests, just as every other technology company does. So even though it might be possible to use technology to free us from technology, for the moment we users are probably on our own to take responsibility for our technological habits, our dependencies, and our awareness of how we are being influenced. Until other forces intervene on our behalf, the technology we use will likely continue to lead us where it wants us to go.