The Guardian had an interesting story, ‘New AI fake text generator may be too dangerous to release‘, about OpenAI’s GPT-2 language model.
I spent some of today watching social media streams linking to the paper. I find it intriguing that the approach relies heavily on training on a large amount of text, some 40GB, drawn from a relatively clean source. According to The Verge, 4chan was not used.
The text shown on the OpenAI page provokes questions about how one might begin to read it if it were not prompted or highlighted as machine written. It reads slightly stilted, but it is certainly intriguing.
The size of the training data and the number of parameters (1.5 billion in the largest model) suggest a scale of machine that requires a large amount of storage and processing power. From this, one might infer that the electricity bill is somewhat hefty as well.
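For a rough sense of that scale, here is a minimal back-of-envelope sketch. It assumes 32-bit floating point weights and an Adam-style optimiser, neither of which the OpenAI post specifies; the 1.5 billion parameter count is the published figure for the largest GPT-2 configuration.

```python
# Back-of-envelope estimate of the storage footprint of a large
# language model's weights. Assumptions: float32 weights (4 bytes
# each) and an Adam-style optimiser for the training estimate.

PARAMETERS = 1_500_000_000  # GPT-2's largest published configuration
BYTES_PER_PARAM = 4         # float32; roughly half this for float16

weights_gb = PARAMETERS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weights_gb:.1f} GB")  # ~6.0 GB

# Training needs considerably more: gradients plus optimiser state
# (e.g. Adam's two moment buffers) roughly quadruple the working
# memory, before counting activations or the 40GB of training text.
training_gb = weights_gb * 4
print(f"Rough training working set: ~{training_gb:.1f} GB")
```

Even on these generous assumptions, simply holding the trained weights takes around 6GB, and training multiplies that working set several-fold, which gives some intuition for the hardware, and electricity, involved.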
I gather that some newspapers also use AI to sift through data and draft simple reports, though I suspect these go through editing stages. Hayles (2004) suggests cyborg reading as media-specific analysis (MSA), but how does one develop MSA to think about joint machine and human writing? I have been wondering this as papers about GANs and images have come through. The Digital Ethics Lab at the Oxford Internet Institute (OII) had a project on deepfake pornography, but the nagging question in my mind is how we might think about the next Hitler Diaries: a series of diaries purportedly written by Hitler that were authenticated before being exposed as fakes.
Further questions might be raised about the interaction between human and machine versions of culture. The critical gap (Galloway 2012) within the interface is hidden still further. What does it mean for thinking about how the machine creates its own culture? How might we understand that culture as it diverges from its human seeds?