
A sharp-eyed developer at Krita recently noticed that, in the settings for their Adobe Creative Cloud account, the company had opted them (and everyone else) into a “content analysis” program whereby it “may analyze your content using techniques such as machine learning (e.g. for pattern recognition) to develop and improve our products and services.” Some have taken this to mean that Adobe is ingesting your images for its AI. And … it does. Kind of? But it’s not that simple.

First off, lots of software out there has some kind of “share information with the developer” option, where it sends telemetry like how often you use the app or certain features, why it crashed, etc. Usually it gives you an option to turn this off during installation, but not always — Microsoft incurred the ire of many when it basically said telemetry was on by default and impossible to turn off in Windows 10.
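For comparison, here is a minimal sketch of the consent-gated pattern telemetry is supposed to follow: events only leave the machine if the user has explicitly opted in. This is illustrative Python, not any vendor’s real client; the endpoint and event names are placeholders.

```python
import json
import urllib.request

# Placeholder endpoint for illustration only; not a real telemetry service.
TELEMETRY_ENDPOINT = "https://example.com/telemetry"


def send_telemetry(event: dict, opted_in: bool) -> None:
    """Send a usage event only if the user has opted in.

    The respectful pattern: default opted_in to False and let the
    installer or a settings page flip it, rather than shipping it on.
    """
    if not opted_in:
        return  # nothing leaves the machine
    payload = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        TELEMETRY_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


# Example: a hypothetical feature-usage event, sent only with consent.
send_telemetry({"event": "filter_applied", "name": "gaussian_blur"}, opted_in=False)
```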

That’s gross, but what’s worse is slipping in a new sharing method and opting existing users into it. Adobe told PetaPixel that this content analysis thing “is not new and has been in place for a decade.” If the company was using machine learning for this purpose a decade ago, and said so, that’s quite impressive; so is the fact that apparently no one noticed the whole time. That seems unlikely. I suspect the policy has existed in some form but has quietly evolved.

But the wording of the setting is clear: Adobe may analyze your content using machine learning, not for the purposes of training machine learning. As it says in the “learn more” link:

For example, we may use machine learning-enabled features to help you organize and edit your images more quickly and accurately. With object recognition in Lightroom, we can auto-tag photos of your dog or cat. In Photoshop, machine learning can be used to automatically correct the perspective of an image for you.

A machine learning analysis would also allow Adobe to tell how many people were using Photoshop to, say, edit images of people versus landscapes, or other high-level metadata. That could inform product decisions and priorities.
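To make that concrete, here is a minimal, hypothetical sketch (in Python, and emphatically not Adobe’s actual pipeline) of how per-image tags from an object-recognition model could be rolled up into exactly that kind of aggregate metadata. `classify_image` is a stub standing in for a real trained model, so the sketch stays runnable.

```python
from collections import Counter
from typing import Iterable


def classify_image(path: str) -> str:
    """Hypothetical stand-in for an object-recognition model.

    A real pipeline would run a trained classifier on the pixels;
    this stub just tags by filename so the example is runnable.
    """
    name = path.lower()
    if "portrait" in name or "selfie" in name:
        return "people"
    if "mountain" in name or "beach" in name:
        return "landscape"
    return "other"


def aggregate_usage(paths: Iterable[str]) -> Counter:
    """Roll per-image tags up into high-level usage counts.

    Only the category tally leaves this function, never the images,
    which is the sort of aggregate metadata the article describes.
    """
    return Counter(classify_image(p) for p in paths)


if __name__ == "__main__":
    edited = ["selfie_01.jpg", "mountain_trip.png", "beach_day.jpg", "scan.tif"]
    print(aggregate_usage(edited))
    # Counter({'landscape': 2, 'people': 1, 'other': 1})
```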

You may very well point out that this language leaves open the possibility that the images and analysis will be used to train AI models, as part of the “develop and improve our products and services” bit.

[Image: Make yours look like this. Image Credits: Adobe]

True, but Adobe clarified that “Adobe does not use any data stored on customers’ Creative Cloud accounts to train its experimental Generative AI features.” That wording is clear enough, though it also has the kind of legal precision that makes you think the company is talking around something.

If you look closer at its documentation, it does indeed say: “When we analyze your content for product improvement and development purposes, we first aggregate your content with other content and then use the aggregated content to train our algorithms and thus improve our products and services.”

So it does use your content to train its algorithms. Perhaps just not its experimental Generative AI algorithms.

In fact, Adobe has a program specifically for doing that: the Adobe Photoshop Improvement Program, which is opt-in and documented here. But it’s entirely possible that your photos are, through one tube or another, being used as content to train a generative AI. There are also circumstances when it might be manually reviewed, which is a whole other thing.

Even if it isn’t the case that Adobe is harvesting your creativity for its models, you should opt out of this program and any others if you value privacy. You can do so right here at the privacy page if you’re logged in.


