Yves here. I must admit to being late to this particular policy debate and thus stunned by the use of Orwellian language. “Data race”? “Data democratization”?
As you can see from this article, at least some people have realized that the big tech companies collect a great deal of information about our activities and, to make matters worse, that the game is close to winner-take-all. Those who collect a lot of information can do more analysis simply by virtue of their greater scale, and they have more, and therefore almost certainly richer and deeper, data to mine.
Instead of restricting their collection and use of information, some tech giants are making the thoroughly disingenuous offer to share more of it. It is not hard to see how this could be read as a bribe to countries that lack NSA-style surveillance apparatus, since what Google or even Facebook has on their citizens would put them ahead of where they are now.
Of course, an important issue is the users themselves. They may be troubled when they see how much Google and Apple and even Facebook know about them, but they are reluctant to go the extra mile in the name of privacy (for example, punching in their current address to find nearby businesses rather than letting their device track their location), or even to press their elected officials on the issue.
By Maurice Stucke, a law professor at the University of Tennessee. Originally published on the Institute for New Economic Thinking website.
With the clamor of policy proposals and antitrust enforcement, it looks as if the tech giants Google, Apple, Meta and Amazon will finally be brought under control. The New York Times, for example, recently heralded Europe’s Digital Markets Act (DMA) as “the most comprehensive tech regulation since the European privacy law, the GDPR, was passed in 2018.” As Thierry Breton, one of the European Commission’s top digital officials, said in the article, “We are ending the so-called Wild West dominating our information space. A new framework that can be a reference for democracies around the world.”
So, will the DMA and all the other policies proposed in the US, Europe, Australia and Asia make the digital economy more competitive? Maybe. But will they promote our privacy, autonomy and well-being? Not necessarily, as my latest book, Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy, explores.
Today, a handful of powerful tech companies, the data monopolies, hoard our personal data. This fails us in several important ways. For example, our privacy and autonomy are threatened when the data monopolies steer the path of innovation toward their interests rather than ours (such as research into artificial neural networks that can better predict and manipulate our behavior). Deep learning algorithms currently require massive amounts of data that only a few companies possess. The data divide can thus lead to an AI divide, since access to large datasets and computing power is required to train the algorithms, and that in turn can lead to an innovation divide. As a 2020 research paper found: “AI is increasingly being shaped by a small number of actors, most of whom are affiliated with big tech companies or elite universities.” The “haves” are the data monopolies, with their huge datasets, and the top universities they work with; the “have-nots” are the remaining universities and everyone else. This divide is not the result of diligence. Rather, it stems partly from those universities’ access to the massive datasets and computing power of the big tech companies. Without “democratizing” these datasets by providing a “national research cloud,” the authors warn, our innovation and research will be shaped by a handful of powerful tech companies and the elite universities they happen to support.
When data is non-rivalrous, i.e. when one party’s use does not reduce the supply available to others, more companies can glean insights from the data without diminishing its value. As Europe points out, most data is either unused or concentrated in the hands of a few relatively large companies.
As a result, recent policies, such as the DMA and the Data Act in Europe and the American Choice and Innovation Online Act in the United States, seek to improve interoperability and data portability and to reduce the ability of data monopolies to hoard data. By democratizing data in this way, more companies and nonprofits can glean insights and derive value from it.
Let’s assume that data sharing can add value for the recipient. The key is to ask how we define value, and for whom. Suppose a person’s geolocation data is non-rivalrous: its value does not diminish when it is used for multiple non-competing purposes:
- Apple can use geolocation data to track users’ lost iPhones.
- Navigation apps can use the iPhone’s location to understand traffic conditions.
- Health departments can use geolocation data for contact tracing (to assess whether users have been in contact with someone with COVID-19).
- Police can use this data for surveillance.
- Behavioral advertisers can use geolocation data to profile an individual, influence her consumption, and measure the success of advertising.
- Stalkers can use geolocation data to intimidate users.
While everyone can derive value from geolocation data, individuals and society will not necessarily benefit from all of these uses. In a 2019 survey, more than 70% of Americans said they do not believe they benefit from this level of tracking and data collection.
In surveys from 2019 and 2016, more than 80% of Americans and more than half of Europeans, respectively, expressed concern about the amount of data collected for behavioral advertising. So even if governments, behavioral advertisers, and stalkers derive value from our geolocation data, the welfare-optimizing solution is not necessarily to share the data with them and with anyone else who might derive value from it.
Nor is the welfare-optimizing solution simply to encourage competition over data, as Breaking Away explores. The fact that personal data is non-rivalrous does not dictate the best policy outcome, nor does it suggest that data should be priced at zero. In fact, a “free” flow of granular personal data leaves us worse off.
When reviewing proposals to date, policymakers and academics have yet to fully address three fundamental questions:
- First, will more competition necessarily promote our privacy and well-being?
- Second, is “who owns personal data” even the right question?
- Third, if personal data is non-rivalrous, what are the policy implications?
As for the first question, many believe we simply need more competition. Although Google and Meta have a different business model from Amazon’s, and Amazon’s differs from Apple’s, all four companies have been accused of abusing their dominance using similar tactics, and all four benefit, directly (or, in Apple’s case, indirectly), from the substantial revenues of behavioral advertising.
So the proposed cure is more competition. But as Breaking Away explores, when the competition itself is toxic, more of it won’t help. Here, competitors vie to exploit us by discovering better ways to addict us, erode our privacy, manipulate our behavior, and capture the resulting surplus.
As for the second question, there has been a long-standing debate over whether to treat privacy as a fundamental, inalienable right or to rely on market-based solutions (grounded in property, contract, or licensing principles). Some have advocated laws that give us an ownership interest in our data. Others advocate strengthening California’s privacy laws, spearheaded by real estate developer Alastair Mactaggart, or adopting regulations similar to Europe’s General Data Protection Regulation. But as my book explains, we should reframe the debate from “who owns the data” to “how do we gain better control over our data, privacy and autonomy.” Simple labels do not provide ready-made answers. Giving individuals an ownership interest in their data does not address the privacy and antitrust risks posed by data monopolies, nor does it give individuals greater control over their data and autonomy. Even if we view privacy as a fundamental human right and rely on accepted principles of data minimization, the data monopolies will still game the system. To illustrate this point, the book explores the significant shortcomings of the California Consumer Privacy Act of 2018 and the European GDPR in curbing privacy and competition violations by the data monopolies.
On the third question, policymakers currently propose a win-win: promoting both privacy and competition. The idea, for now, is that with more competition, privacy and well-being will be restored. But that is only true if companies are racing to protect privacy. In the key digital markets, where the prevailing business models rely on behavioral advertising, privacy and competition often collide. As a result, policymakers may fall into several traps, such as opting for greater competition when in doubt.
As a result, we will face market failures, and the traditional policy responses (defining ownership interests, reducing transaction costs, and relying on competition) will not necessarily work. Simply taking data away from the data monopolies won’t work either, because other companies will just use that data to find better ways to hold our attention and manipulate our behavior (as in the case of TikTok). Instead, we need new policy tools to address the myriad risks posed by these data monopolies and the harmful competition bred by behavioral advertising.
The good news is that we can fix these problems. But doing so requires more than the DMA and other current policies provide. It requires policymakers to properly align privacy, consumer protection and competition policies so that the ensuing competition is not about us (with us as the product) but actually for us (improving our privacy, autonomy and well-being).