Everyone agrees that children should be protected online, just as they enjoy a special position offline. The principles formulated by the UK Information Commissioner’s Office (ICO) seem appropriate.
The problem lies in the implementation. Tech companies that offer services likely to be accessed by a child will have to take account of the ICO principles.
The UK Information Commissioner, Ms Denham, clarifies that under the final code, instead of trying to determine a user’s age, online services could simply apply the standards for children to all users.
Applying the standards to all users goes beyond what is specified by law and is an unnecessary curbing of adult autonomy.
Ms Denham’s fear is probably that determining a user’s age amounts to requiring users to reveal their full identity. That fear is unfounded.
In line with privacy-by-design principles, treating an end-user’s age bracket (being over or under an age threshold) as a disclosable attribute suffices to establish whether an end-user is a minor, as, for instance, IRMA does.
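The data-minimisation idea can be sketched as follows. This is only an illustration: IRMA’s real protocol rests on cryptographic attribute-based credentials, and the `AgeAttribute` record and `is_minor` helper below are hypothetical names, not IRMA’s API.

```python
from dataclasses import dataclass

# Hypothetical attribute-based disclosure: the holder reveals only a
# boolean over-18 attribute, never a birth date or full identity.
@dataclass(frozen=True)
class AgeAttribute:
    over_18: bool  # the ONLY fact the service learns about the user

def is_minor(attr: AgeAttribute) -> bool:
    """The service decides minor/adult from the single attribute."""
    return not attr.over_18

print(is_minor(AgeAttribute(over_18=False)))  # prints: True
```

The point of the sketch is what is absent: no name, no birth date, no identifier reaches the service, only the single yes/no attribute.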
A next step could be to use structured data markup with age metatags (think: Google Shopping) for age-sensitive product items, service items and content items.
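Such markup could build on existing structured-data vocabularies. Schema.org already defines a `typicalAgeRange` property; whether it is precise enough for regulatory use is an open question, and the item below is a made-up example, not actual Google Shopping output.

```python
import json

# Sketch of age metadata for an item, loosely modelled on the
# schema.org / Google Shopping structured-data style.
item_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example age-restricted item",
    "typicalAgeRange": "18-",  # existing schema.org property: adults only
}

print(json.dumps(item_markup, indent=2))
```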
End-users who disclose an under-the-threshold attribute would trigger services to hide items not suited for minors.
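Server-side, this amounts to a simple filter keyed on the disclosed attribute. A minimal sketch, assuming each item carries a hypothetical `adults_only` tag derived from its age metadata:

```python
# Hide adult-only items from users who disclosed an under-threshold
# age attribute; adults see the full catalogue.
def visible_items(items: list[dict], user_is_minor: bool) -> list[dict]:
    if not user_is_minor:
        return items
    return [i for i in items if not i.get("adults_only", False)]

catalog = [
    {"name": "toy", "adults_only": False},
    {"name": "liquor", "adults_only": True},
]
print([i["name"] for i in visible_items(catalog, user_is_minor=True)])
# prints: ['toy']
```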
Peer-to-peer items not suited for minors should be detected by AI and end-user flagging, and then decided upon by human editors, just as happens now with illegal content. This system is far from perfect, but it could be a base to build upon.
In extreme cases, institutions like the police could be alerted. A procedure should then be in place that forces services to change an item’s status within a very limited period of time when prompted by the police.
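The triage described in the last two paragraphs could be sketched as a simple decision rule. The thresholds, the 24-hour deadline and the function names below are all assumptions for illustration, not anything the code or the ICO specifies:

```python
from datetime import datetime, timedelta

POLICE_DEADLINE = timedelta(hours=24)  # assumed statutory time limit

# Route an item: a police notice imposes a hard deadline; otherwise a
# high AI score or enough user flags queues it for human editors.
def triage(ai_score: float, user_flags: int, police_notice: bool):
    if police_notice:
        return ("must_act_by", datetime.now() + POLICE_DEADLINE)
    if ai_score > 0.8 or user_flags >= 3:
        return ("human_review", None)
    return ("no_action", None)

print(triage(0.9, 0, False)[0])  # prints: human_review
```

The design choice worth noting is that the police path bypasses the review queue entirely and attaches a deadline, matching the escalation procedure argued for above.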