Upwards and in the direction of good
Last week, the "Cyberspace Administration of China" — I put that in quotes because I have a hard time believing that the word cyberspace is in the name of an official state bureau; it's like if the U.S. Department of Commerce had an Undersecretary of the Metaverse — released a set of draft regulations (here in English) addressing tech companies' use of recommendation algorithms and a "10-point plan" to curtail celebrity popularity and fan communities. Apparently there can be only one sanctioned personality cult in China, or maybe two, if Mao still has any credibility.
In some ways, China's algorithm rules seem like a wish list for critics concerned about the power of American tech platforms. As Kendra Schaefer details in this thread, the rules stipulate that consumers must be informed when content is being algorithmically sorted for them and which keywords are being used to target them, and they must be given the opportunity to opt out. Price discrimination is forbidden. Certain kinds of keywords are made illegal (presumably efforts would be made to eliminate proxies as well, though that may be logically impossible), and algorithms can't be optimized to maximize engagement, fuel controversy, or encourage compulsive use.
The penalties for violating these rules, Schaefer notes, are somewhat nominal. In many places the language is extremely general (e.g., algorithms must "vigorously disseminate positive energy") and probably unenforceable in any nonarbitrary way. And they are certainly geared toward protecting the state's interests in having a monopoly on propaganda and the ability to dictate which sorts of phenomena can go viral. The CCP has its own content "algorithms" encoded in its bureaucracy, much the way that the mainstream media has "algorithms" embedded in its hiring practices, its institutional memory, its legacies and hierarchies and biases.
Nevertheless, it's interesting what kind of media regulation there can be when profit and greed are not held to be sacrosanct and where caveat emptor is not taken to be the best ethics that a society can come up with. It doesn't seem like the end of free speech as we know it to force platforms to disclose when their algorithms are deployed and what they are optimized for. It seems like a possibly constructive use of state power to give individuals some rights in understanding how they are being characterized and classified by hegemonic business interests. If there were going to be any kind of broad "trust in AI," it might start with something like this.