> In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well as italics for genera and species. These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors. This was highly embarrassing for the journal; resolution took six months and was achieved only through the persistent efforts of the editors. AI processing continues to be used, regularly reformatting submitted manuscripts in ways that change meaning and formatting and requiring extensive author and editor oversight during the proof stage.
Was the removal of proper nouns some sort of AI safety mechanism?
Thorrez 1 days ago [-]
I don't think the AI removed proper nouns, but rather uncapitalized the proper nouns. It sounds to me like the AI was trying to improve formatting, but instead made formatting worse.
ciconia 1 days ago [-]
The proper question to ask is: why is AI employed to edit articles when there's already a team of professional editors in place?
fredfoo 1 days ago [-]
To get them to train an AI to replace them?
conartist6 2 days ago [-]
This tickles my sense of schadenfreude.
> The mass resignation is the 20th such episode since early 2023, according to our records. Earlier this year, Nature asked, “what do these group exits achieve?”
I'm beyond flabbergasted that people don't yet know the cost of alienating their truest believers with displays of rampant cynicism. To be clear I know nothing about this journal, but I do know that many companies are more eager than ever to see humans as replaceable with language models. Models don't have integrity though. They can slowly destroy your legacy but they won't build it like people will.
mbreese 1 days ago [-]
Might as well add the link for the Nature commentary, which, in and of itself, is a pretty interesting read:
https://www.nature.com/articles/d41586-024-00887-y
Unsurprisingly, the main motivator is greater control. But the article suggests that the underlying motivation is not just control, but rather to provide a better quality journal for their audience. For publisher-owned journals, the editors have limited power... and this is the problem for many.
darkhorse222 22 hours ago [-]
If there's one thing I've learned from executive responses to COVID, it's that unless they are founders with a personal relationship to the company, executives are short-sighted sycophants to the market who hire short-sighted sycophants.
It's all a sham. As long as they get paid in the short term they do not care about the long term. Also, if it cannot be measured trivially then it does not exist to them.
0points 16 hours ago [-]
> I do know that many companies are more eager than ever to see humans as replaceable with language models.
Let them do that and watch while they destroy their source of income in the process.
The industry needs to grow up.
They'll just replace them with AI or, more realistically, with "Early Career Researchers" — those in academia with fewer than 20 years' experience (after completing their PhDs).
labster 1 days ago [-]
The great thing about AI peer reviewers is how fast you get feedback. There’s no need to wait weeks for your paper to be rejected when reviewers G, P, and T will tell you right away your paper isn’t the right fit.
mmooss 1 days ago [-]
What was the status of Journal of Human Evolution in that field? Can it be replaced, and if so, with what?
Protests are a tactic in a fight, part of a strategy for victory; they are not a rain dance that is rewarded with good things from a supernatural power. Do these people have a strategy for winning?
Aurornis 1 days ago [-]
> Do these people have a strategy for winning?
Not contributing to journals they disagree with and moving their time and effort to journals they do agree with is a win.
ninjin 1 days ago [-]
Indeed, in my own area of research there was the resignation from Machine Learning back in 2001 [1], which was one of the final nails in the coffin for publishing behind any real access restrictions in AI. Notable exceptions would be AAAI and of course regressive forces such as Google DeepMind that insist on publishing in closed journals like Nature, despite the rest of the field keeping their research public and open.
mmooss 1 days ago [-]
How do you know which papers to read? It seems like it would be overwhelming without some filter, and perhaps quality would be lower without editorial standards?
(I'm not advocating for a paywalled journal, but I'm wondering if a free journal designated as the premier one in the field would be useful.)
ninjin 1 days ago [-]
Lower quality without editorial standards? The amount of rubbish I see published with editorial "standards" is enormous (in my own field and others). Personally, I think that quality is better judged by whether work gets used by others, and keeping others in check can be done by encouraging authors to publish papers criticising and invalidating bad research.
As for how I know what to read: I talk to fellow researchers and students plenty, read abstracts, and I am senior enough to sniff out rubbish rather quickly. If you want an example of an amazing, leading, free/open journal, look at the Transactions of the Association for Computational Linguistics [1]. But the entire literature of natural language processing is open these days [2]. For the wider area of research: NeurIPS (formerly NIPS) and ICLR are fully open. AAAI is not, but the quality of what is presented at AAAI tends to be worse than the open venues anyway, and as I said earlier, no one of note publishes in closed journals other than Google DeepMind. It should be noted that we are very much a "conference-driven" field these days; I know plenty of other fields are not, but I am not fit to comment on their situation.
[1]: https://en.wikipedia.org/wiki/Transactions_of_the_Associatio...
[2]: https://aclanthology.org
> The amount of rubbish I see published with editorial "standards" is enormous (in my own field and others).
Every human institution is flawed; that doesn't mean the alternative institution, or no institution at all, is better.
What I'm really wondering is: how can you keep up efficiently? And how do you have more objective standards? The system you describe seems very prone to popularity and political contests, and to ubiquitous Internet mob actions. Critiques by others aren't really useful signals unless you critique the critiques carefully - and who has time?
I'm not saying you have no answer, I'm just trying to understand how it works.
ninjin 13 hours ago [-]
> Critiques by others aren't really useful signals unless you critique the critiques carefully - and who has time?
Well, I take the time, and about a quarter of my most impactful papers have been such critiques. How do we encourage it? Well, ICLR (or was it NeurIPS?) a few years ago ran a replication challenge where, if you could replicate a paper, you got co-authorship. Not sure how much I love that strategy, but I am sure there are ways to create a sane "economy" around it.
As for whether we are better or worse off with the current state in my own field: I do not know. We end up in some sort of social-science-esque argument where we simply cannot prove the experiment either way, as it can only be run once, and have to argue on very shaky grounds (it also does not help that the field is exploding like pretty much no other field ever has, which comes with its own issues). I think I am keeping up and I think that I personally have a decently objective view, but I cannot prove that to you. What I can say is that there is not a single scientist around me who is not acutely aware of the problems with the previous and current systems. But given how "bottom up" we are, without big beasts like Elsevier around that would have a deep financial interest in enforcing the status quo, I believe we will arrive at solutions, and faster than we otherwise would. There will be pain, yes, but see for example TACL, ACL Rolling Review, ICLR, etc. These are all initiatives that have been fielded by the community, and I would argue two have already been great successes while one is struggling but could still succeed.
PakG1 1 days ago [-]
The top scholars in a field will know which papers to read and which papers to cite. They'll talk with each other via email, chat groups, and at conferences. If the top scholars are in agreement, it's pretty hard for a journal to maintain its status. Outside of a discipline, laypeople can't tell. But if you're a top scholar inside the discipline, you'll be part of these conversations and you'll know. And thereby so will your discipline colleagues and PhD students who are not yet top scholars.
mmooss 1 days ago [-]
Not if their field continues to use the journal they left, and the protestors are left out in the cold.
Not wasting your career being forced to put out crap seems like a pretty big win to me.
I walked away from what was my dream job, and a large and growing income, after a succession of years in which currents of partner neglect shut down my ability to move forward at any reasonable pace.
A few years later, that big, lossy-looking move appears to have become a big win.
But it really was a win from the day I quit, in terms of mental health and happiness, no matter how things could have turned out.
more_corn 1 days ago [-]
Seems like unwise use of AI is creating a backlash. We should probably slow down the headlong rush until we can deploy this technology more wisely.
I'm reminded of Apple's decade-long new product lifecycle. They iterate, test, ruminate, and repeat till they have finely polished technologies. Sometimes it just comes down to filing the rough edges off; sometimes a re-think is necessary.
viraptor 1 days ago [-]
> repeat till they have finely polished technologies.
They haven't done that step for quite a while now. I feel like people keep repeating that story from years ago. But it doesn't hold anymore unfortunately...
talldayo 1 days ago [-]
Apple's new product lifecycle is on life support, particularly with AI. 10 years ago Apple had the right idea - invest in OpenCL, court AMD and other GPU makers and unify on a competitive, complex GPGPU standard. Nvidia would have stood no chance, even if they continued researching AI.
What baffles me is that Apple abandoned this completely sound theory for a risky (and entirely incorrect) bet on NPU hardware. They left OpenCL to bleed out, they simplified their GPU hardware to specialize functionality better, and ended up putting all their chips on the wrong bet. Now they pay OpenAI to run their models on Apple hardware, and apparently can't even do that without help from Nvidia too.
It'll be interesting to see what future generations make of Tim Cook's leadership. He started the decade so strong picking up right where Jobs left off, and ended the decade with several antitrust lawsuits, ballooning subscription services and the professedly failed launch of one Vision Pro headset. Perhaps Apple needs a decade to file a few of their own rough edges down again.
wtallis 1 days ago [-]
> What baffles me is that Apple abandoned this completely sound theory for a risky (and entirely incorrect) bet on NPU hardware.
It wasn't a sound strategy. OpenCL was too little, too late, even 15 years ago: CUDA was already dominant, and OpenCL wasn't better in any way except being available on some hardware that wasn't as good as NVIDIA's. OpenCL 2.0 a few years later was even more of a failure (NVIDIA basically refused to implement lots of new features and had enough leverage to force OpenCL 3.0 to make everything added after 1.2 optional). By contrast, Apple's NPU solved real problems for the iPhone in the domain of camera and computer-vision features, operating within a reasonable power budget. Even today the NPU remains useful and superior to GPUs for some applications.
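To make the "everything added after 1.2 became optional" point concrete: under OpenCL 3.0, portable host code has to probe at runtime for features that 2.x treated as mandatory (SVM, pipes, device-side enqueue, the generic address space). Here is a minimal C sketch of such a probe, assuming the standard Khronos headers and whatever OpenCL ICD is installed (illustrative only, not code from the thread):

    /* Probe whether OpenCL 2.x-era features are actually present on the first
       device found. Under OpenCL 3.0 a conforming driver may legally report
       "no" to all of these. Build with: cc probe.c -lOpenCL */
    #define CL_TARGET_OPENCL_VERSION 300
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void) {
        cl_platform_id platform;
        cl_device_id device;
        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS) {
            fprintf(stderr, "no OpenCL platform/device found\n");
            return 1;
        }

        char version[128] = {0};
        clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof version, version, NULL);

        /* Mandatory in OpenCL 2.x, optional in 3.0. Values stay zeroed if a
           query fails (e.g. on an older driver), which reads as "no". */
        cl_bool generic_as = CL_FALSE, pipes = CL_FALSE;
        cl_device_device_enqueue_capabilities enqueue = 0;
        cl_device_svm_capabilities svm = 0;
        clGetDeviceInfo(device, CL_DEVICE_GENERIC_ADDRESS_SPACE_SUPPORT,
                        sizeof generic_as, &generic_as, NULL);
        clGetDeviceInfo(device, CL_DEVICE_PIPE_SUPPORT, sizeof pipes, &pipes, NULL);
        clGetDeviceInfo(device, CL_DEVICE_DEVICE_ENQUEUE_CAPABILITIES,
                        sizeof enqueue, &enqueue, NULL);
        clGetDeviceInfo(device, CL_DEVICE_SVM_CAPABILITIES, sizeof svm, &svm, NULL);

        printf("device reports: %s\n", version);
        printf("generic address space: %s\n", generic_as ? "yes" : "no");
        printf("pipes:                 %s\n", pipes ? "yes" : "no");
        printf("device-side enqueue:   %s\n", enqueue ? "yes" : "no");
        printf("shared virtual memory: %s\n", svm ? "yes" : "no");
        return 0;
    }

(On macOS, Apple's own OpenCL framework was deprecated while still at version 1.2, so a probe like this mostly just documents the gap described above.)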
> they simplified their GPU hardware to specialize functionality better,
What features went missing, and when?
digdugdirk 1 days ago [-]
Do you have any articles/resources for someone to read more about this? I've only been hearing praise for Apple's chip strategy/designs lately, and I'm not up to speed on what you're referring to. I'd love to learn more about a different perspective.
DeepPhilosopher 1 days ago [-]
Seconded
TaurenHunter 1 days ago [-]
That must be because they see AI as the next step in human evolution.
Xen9 1 days ago [-]
The non-optimistic reality seems to be that AI will be the next step in evolution, just not in human evolution. It is far more complex to create a smart AI out of a human than to create a smart AI from scratch.