Fair use of data is one of the key elements of responsible AI. We shouldn't only care about the quality of the data, but also about how it was retrieved (mind you, there are often important connections between the two). In the digital economy, personal data is currency. Platforms like Facebook or Snapchat appear free but, as we are finally becoming aware, they are not. Are we, as users, paying too high a price for these services? In this blog post, we wish to show that redirecting the flows of personal data is possible.
Algorithmic decision-making is spreading rapidly across industries as well as in public services. By default, AI systems based on machine learning or deep learning produce outputs with no explanation or context. As the predicted outcomes turn into recommendations, decisions or direct actions, humans tend to look for justification. Explainable AI (XAI) provides cues as to how and why a decision was made, helping humans to understand and interact with the AI system.
When we advocate for privacy, we tend to concentrate on the negative consequences of privacy violations [56; 32; 19; 50]. These portrayals are extremely important, but they paint only one half of the picture. Privacy also brings net-positive advantages for individuals and organizations. These advantages can act as powerful internal incentives driving privacy adoption, a key addition to external incentives such as regulation and public pressure.
In late April, the European Commission laid out its vision of how AI technologies should be governed. The proposal meshes AI regulation with product safety rules, controls how AI systems should be developed and technically composed, introduces market supervision arrangements and gives the authorities wide-ranging powers to impose potentially crippling sanctions.
At first glance, the proposal is a lot to take in. It is complex, appears relatively heavy-handed and imposes a host of stringent requirements on developers. But is there really a reason to be worried?
The strategy of every growth-minded company will soon include a timeline for when the organization will be AI-ready, that is, when the company considers itself ready for the broad use of artificial intelligence. Because the meaning of the concept still seems hard to grasp, we will try to explain here what AI readiness means in our view.