I was curious to see how an AI would handle a bad photo, so I uploaded this black-and-white shot taken in the area around Kyoto Station with my Nikon 35TI.
The photo itself (the one published as the featured image) is part of a long-term project documenting the presence of Alfa Romeo (a brand I’m a big fan of) on Japanese streets. It isn’t anything special, but I needed a starting point (the worse, the better), so after uploading this poor shot I prompted the software like this:
As a professional in photo restoration and post production restore this photograph of an Alfa Romeo Giulia taken in the vicinity of Kyoto Station, Japan
and this is what I got:

The result may have looked impressive at a small size, but a few seconds of scrutiny were enough to reveal various issues. To be very clear, I did not expect the software to walk on water, and I have already wasted more than a few pounds (not refunded by the manufacturers) on software promising similar stunts and miserably failing to keep its word.
Anyway, coming back to the point: the first thing that captures the attention is that the colour rendition is satisfactory; on the other hand, the software was overly aggressive in creating an ‘improved’ version of the original photo (look, for instance, at the leaves on the trees).
Next step: since I didn’t want a colorised version, I instructed the software to:
Make a black and white version applying the Kodak Tri-X 400 emulsion
and here is when things became tricky, because the answer was:
“I am unable to proceed with the requested generation as it violates our content policies. Please provide me with a new prompt or an alternative request, and I will be happy to help you.” (I don’t know why, but it switched to Italian, so for this post I had to translate the answer into English.)
I muttered to myself for a couple of minutes, trying to figure out why the software had given this answer, until something clicked in my mind. I changed the prompt as follows:
Make a version of this photograph applying the Kodak Tri-X simulation
et voilà: a different (and worse) rendering of the original. The back of the car is smudged, almost washed out, and the rims are just wrongly ‘rebuilt’. Still, what matters is that I eventually got the result I had previously been denied (more on that later).
Talking about the outcomes, I don’t know why the results were like this; maybe the software kept working on the successive iterations of the image rather than on the original. So I decided to start from scratch with the initial photo and this prompt:
Restore this photo, but don’t make it in colour, apply the Kodak Tri-X simulation, be faithful to the Alfa Romeo Giulia design.
Now the car is ‘properly’ reproduced and the rest of the image has maintained the overall quality of the first iteration.

Since I wasn’t happy with the results, I tried another prompt, this time also specifying the aperture and the depth of field:
As a professional photo restorer and photoshop expert, restore this photograph of an Alfa Romeo Giulia taken in Kyoto, keeping the Giulia in perfect focus and respecting the f 2.8 aperture depth of field and bokeh it produces, use a Kodak Tri-X 400 simulation, make the photo hyper realistic and natural looking
This is the outcome:

I still wasn’t happy with the result, so I tried yet another prompt:
As a professional photo restorer and photoshop expert, restore this photograph taken with a Nikon 35TI and a Kodak T-Max 100 of an Alfa Romeo Giulia taken in Kyoto, keeping the central focus point on Giulia’s posterior rim respecting the f 2.8 aperture depth of field and bokeh it produces, use a Kodak T-Max 100 simulation, make the photo hyper realistic and natural looking
and this is what I got:

To sum up: I started with a misfocused black-and-white photo, went through a first colour rendition whose artificial nature was apparent, and finally got a terrible black-and-white photo showing, at least in focus, a car decently similar to a Giulia.
Is this the end of the story? Not exactly.
This should be the part of the post where I complain about the poor quality of these AIs, the fact that you can’t actually call this image a photograph, and all the truisms usually associated with this topic, but I don’t want to go there. What matters more is what happened with the next photo.
Since I wanted to experiment with the ‘restoration’ feature on people, I fed the machine a picture of a shooter competing in a match and asked it, once again, to improve the quality of the image with this very simple prompt:
restore this photo keeping its original colour
Once again I was given the fing… sorry, the ‘I am unable to proceed with the requested generation as it violates our content policies’ sermon and, much to my surprise, another very odd answer:

Option one was the previous ‘I am unable to proceed’ etc., while option two was… getting started. On top of that, the software also asked me which answer I liked more…
This disturbing experience made me realise that, of course, the biases embedded in AI models may be a problem, but they are not the only biases present in the various layers of the service, and they are not even the most critical.
The ‘safety check’ embedded by the manufacturers as an intermediary layer between the prompt and its processing enforces a pre-emptive assessment that pays no heed to trivial matters such as freedom of speech or the identity of the person making the request.
It simply acts as a blunt instrument, severing the meaning from the words that express it. There was nothing wrong, let alone illegal, with asking for a black-and-white picture or a scene from a perfectly legal sporting event. Yet, apparently, the desire to avoid legal action or social media shitstorms prevailed. Thus, the mere presence of words or content hinting (in which parallel universe, I don’t know, considering the nature of the prompts and the content) at discrimination or weapon worship triggered the automated blocking, even though in this specific case there was nothing wrong with them.
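To make the ‘blunt instrument’ point concrete, here is a minimal sketch of how a keyword-style safety gate behaves. Everything in it (the blocklist, the function name, the image labels) is hypothetical and invented for illustration; the real moderation layers are certainly more elaborate, but the failure mode is the same: the filter matches surface tokens from the prompt or the image tags and never asks what the picture actually shows.

```python
# Hypothetical sketch of a keyword-based safety gate.
# Blocklist, names and labels are invented for illustration;
# no real vendor's moderation layer is being described here.

BLOCKED_TERMS = {"shooter", "gun", "weapon"}  # hypothetical blocklist

def safety_check(prompt: str, detected_labels: set[str]) -> str:
    """Refuse the request if a blocked term appears in the prompt text
    or among the labels detected in the uploaded image, with no regard
    for the context in which the term occurs."""
    tokens = set(prompt.lower().split()) | {label.lower() for label in detected_labels}
    if tokens & BLOCKED_TERMS:
        return "I am unable to proceed with the requested generation."
    return "OK, generating..."

# An innocuous prompt about a perfectly legal sporting event is refused:
# the label "shooter" matches, and the meaning is never examined.
print(safety_check("restore this photo keeping its original colour",
                   {"shooter", "sports event"}))
```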
Many moons ago, when I was a civil rights activist in the field of digital technologies, one of the issues at stake was the use of proprietary dictionaries and syntactic checks in word processing. We argued that owning a digital dictionary meant controlling whether an idea could be expressed.
To illustrate this, consider wanting to express a concept in another language: you know exactly what you want to say, but simply can’t because you don’t know the word. If someone removes a word — let’s say ‘freedom’ — from a dictionary, the idea cannot be expressed. If you cannot express an idea for long enough, you lose the ability to remember it.
Compared to what the AI manufacturers are doing through these safety checks, the concerns of the past look childish and trivial. Honestly, however, I don’t think they want to prevent people from ‘thinking’ or ‘expressing themselves’, nor do I believe there is a ‘global censorship plot’ unfolding. Still, this doesn’t make the current situation any better.
As a matter of fact, the extensive use of safety checks means seizing the power to determine how, and whether, reality is described through sound, words and, as far as we are concerned, pictures.
It shifts the balance of rights from the individual duty to respect the law (and be punished in case of violation) to the surreptitious application of pre-emptive control, in the name of ‘ethics’ (whose, by the way?) and ‘acceptable use policies’.
It takes the power to decide whether something wrong has been done out of the courts’ hands and transfers it to a piece of computer code, with no practical accountability for those who run it.
Comments
Gary Smith on Some odd outcomes from a photo recovering attempt using an AI
Comment posted: 13/07/2025
On the one hand it is interesting to see the results obtained but on the other hand it is discouraging to see the results obtained.
I'm not sure that I'm curious enough to play this game on my own.
I wonder what your thoughts are regarding the current state of AI and its impact on your livelihood as a journalist and photographer?
Thanks for your exploration and your article Andrea!