Privacy, Artificial Intelligence and Congress
By U.S. Rep. Warren Davidson
R-Ohio's 8th District
As Artificial Intelligence (AI) continues to grow in both popularity and capability, Congress needs to act proactively, not reactively, to protect the American people by passing a comprehensive framework to address concerns with AI. Pope Leo XIV, recently named one of Time Magazine’s 100 Most Influential Voices on AI, has expressed similar concerns and spoken of the need to “use these gifts [of technology] wisely, ensuring they serve the common good.” What’s more, a new poll from Pew Research shows that 53% of Americans say AI is doing more to hurt than help people keep their personal information private.
Unfortunately, Congress hasn’t taken AI regulatory issues seriously — including the privacy implications. Many of the worst risks with AI don’t come from the technology alone but from the absence of strong privacy safeguards. Without clear limits on how data is collected, stored, and shared, AI becomes a powerful tool for exploitation and surveillance. That’s why any serious conversation about Artificial Intelligence begins with privacy.
Privacy forms the base layer for ethical Artificial Intelligence. How do you safeguard your own data? Who is liable if your data is compromised, especially by AI? Does simply entering a query into a search engine or engaging in dialogue with AI void a reasonable expectation of privacy? Can the company then claim that data as its own? Can that search or query be made public? Sold and monetized? When, if ever, should law enforcement require a warrant or subpoena?
These are important questions that Congress must answer, and if Congress is unable to answer them, then state governments may have to take the lead for now. In the meantime, surveillance capitalism and government spying on its own citizens have run amok. At the intersection of the two, Palantir has been commissioned to develop an AI tool to unify the data in every federal database, turning it into useful, easily accessible information.
The Patriot Act massively expanded domestic surveillance. The Bank Secrecy Act ended any claim to privacy in your financial dealings. Most Bureaus of Motor Vehicles monetize the data that citizens are required to provide to drive or obtain a Real ID. Now the government is buying data that would otherwise require a warrant or subpoena — circumventing the Fourth Amendment.
While the original House version of the Big Beautiful Bill included a 10-year moratorium on state or local regulation of AI, the Senate wisely voted 99-1 to remove that provision. Its removal influenced my decision to support final passage of the bill, and stripping the AI regulatory exemption offers hope of accountability.
Freedom surrendered is rarely reclaimed. Congress may decide to keep things the way they are now with respect to privacy, but it shouldn’t. The Fourth Amendment doesn’t say, “If you have nothing to hide, you have nothing to fear.” Instead, it requires the government to show probable cause and obtain a warrant or subpoena before gaining even limited authority to search or seize your property or information. We desperately need to restore a government small enough to fit within the Constitution.
In 2004, the movie I, Robot depicted a fictitious future (2035) in which three laws were designed to safeguard human interaction with AI-controlled robots.
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
With respect to commerce, privacy resolves the fundamental issue of who owns what data. With that resolved, Artificial Intelligence remains challenging. I, Robot wasn’t far off the mark: the First Law says no human should come to harm. Today, we lack even that baseline for AI. Perhaps we should add a few rules to that list, but we must define the parameters because the real-world consequences are already upon us.
Hollywood imagined rules in 2004 to keep humans safe from AI. Yet, over two decades later, Congress still has not written any rules to do the same. Even worse, many of my colleagues seem entirely unconcerned with the potential repercussions that could soon become reality. Failure to act is its own decision, but AI is moving far faster than Congress, and momentum in the wrong direction makes change harder as time passes. If we don’t act swiftly, our current understanding of what “privacy” means could become a relic.
Warren Davidson represents Ohio's 8th Congressional District.
Comment
I should have...
...been more specific. I was writing about the Zeroth Law of Robotics, added in 1985 in the book "Robots and Empire". It emphasized the priority of humanity over individual lives: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”. It took precedence over the other three laws.
But getting back to the original article, I think the whole brouhaha about AI is way overblown.
Back in 1961, a British AI researcher named Donald Michie made a bet with a fellow researcher that he could make a series of matchboxes and beads "learn" to play what we call Tic-tac-toe. He named the learning machine MENACE (Matchbox Educable Noughts And Crosses Engine). It was just 304 matchboxes and a bunch of beads. If a move resulted in a winning game, that box got a bead; if the move resulted in a losing game, a bead was taken away. It was a slow process, but over many games the boxes for winning moves eventually ended up with most of the beads. With a lot more boxes and beads, you could in principle teach it to play chess.
That's basically what AI is, but with an electronic computer it can make a bazillion moves a second.
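For anyone curious, here's a rough Python sketch of the boxes-and-beads idea: one "matchbox" of bead counts per board position, a move drawn in proportion to its beads, and beads added or removed after each game. The reward sizes, starting bead counts, and the random opponent are my own simplifications, not Michie's actual setup.

```python
import random

WIN_REWARD, LOSS_PENALTY = 3, 1   # assumed reward sizes, not Michie's originals

boxes = {}  # board string -> {move index: bead count}, one "matchbox" per position seen

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def legal(b):
    return [i for i, c in enumerate(b) if c == " "]

def menace_move(b):
    key = "".join(b)
    box = boxes.setdefault(key, {m: 3 for m in legal(b)})  # 3 starting beads per move (assumed)
    moves, weights = zip(*box.items())
    return key, random.choices(moves, weights=weights)[0]  # draw a "bead" at random

def play_one_game():
    """MENACE plays X against a random O opponent; returns MENACE's moves and the result."""
    board, history, turn = [" "] * 9, [], "X"
    while legal(board) and not winner(board):
        if turn == "X":
            key, mv = menace_move(board)
            history.append((key, mv))
        else:
            mv = random.choice(legal(board))
        board[mv] = turn
        turn = "O" if turn == "X" else "X"
    return history, winner(board)

def reinforce(history, result):
    """Add beads to the moves of a winning game, remove beads after a loss."""
    for key, mv in history:
        if result == "X":
            boxes[key][mv] += WIN_REWARD
        elif result == "O":
            boxes[key][mv] = max(1, boxes[key][mv] - LOSS_PENALTY)  # never empty a box

for _ in range(20000):  # "playing multiple games" until beads pile up on the good moves
    history, result = play_one_game()
    reinforce(history, result)
```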
It's hard to feel threatened by a bunch of matchboxes and beads.
•••Publisher's note: Sen. Bernie Sanders seems very threatened by AI. Maybe AI can replace him. "The artificial intelligence and robotics being developed by multi-billionaires will allow corporate America to wipe out tens of millions of decent-paying jobs, cut labor costs and boost profits." – Sen. Bernie Sanders
https://www.youtube.com/watch?v=dthbi4lzO58
Isaac Asimov...
...introduced the Three Laws of Robotics in a story called "Runaround," in 1942. He went on to write dozens of short stories and 5 novels about some of the ethical and moral dilemmas that occur when the 3 laws are ambiguous or contradict each other. What would a robot do if a 5-year-old child and a 79-year-old criminal were both in danger but only one could be saved? Then throw in that rescuing the child will result in the destruction of the robot, but rescuing the criminal won't.
Hollywood did NOT invent them.
If it's not obvious, I'm a big Asimov fan. And what about the Zeroth Law?
•••• Publisher's note: Per AI, "The Zeroth Law of Thermodynamics states that if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other, which allows for the concept of temperature and the creation of thermometers. This means that objects in separate contact with a common system will all share the same temperature."