^ I think I covered most of what you say in your first paragraph when I stated that the results are "meaningless" to these AIs. I should add that I don't imagine we are either close to or far from this future; I only mean to point out this one scenario as a thought experiment for considering impacts on other types of creative industry.
But to pursue that scenario a little further: I don't see it as a problem of general AI, whereas I guess you do? The issues I see are, first, that next-generation LLMs and other generative/adversarial/whatever networks need to be able to represent the scientific literature in a sufficiently coherent mathematical/logical form to make cogent hypothesis generation possible (though given sufficient computing power, generating a high proportion of rubbish hypotheses might not matter!), and second, the matter of 'sufficient computing power' itself. On the former, the requirement that scientific writing be precise and careful makes the task a little easier. On the latter, I agree we aren't there yet, and I have no sense of how long it might take to reach that level, since we're not even sure what that level is. Thus, it might be months…or decades…or…
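To make the 'rubbish might not matter' point a little more concrete, here is a toy sketch (the variables, relations and the empty test function are my own placeholders, nothing from our exchange): if candidate hypotheses can be enumerated and tested mechanically, a high proportion of rubbish is just a constant factor on the search, paid for in computing power rather than in insight.

```python
import itertools

# A toy, purely illustrative sketch of brute-force generate-and-test.
# The variable names, relation list, and test_against_data() placeholder
# are all my own inventions, not anything proposed in this thread.

VARIABLES = ["income", "education", "trust", "turnout"]
RELATIONS = ["increases", "decreases", "has no effect on"]

def generate_all_hypotheses():
    """Enumerate every (cause, relation, effect) triple -- no goal-setting."""
    for cause, effect in itertools.permutations(VARIABLES, 2):
        for relation in RELATIONS:
            yield f"{cause} {relation} {effect}"

def test_against_data(hypothesis):
    """Placeholder for an empirical test (fit a model, check the evidence)."""
    return False  # stand-in; a real test would return a verdict

total = sum(1 for _ in generate_all_hypotheses())
surviving = [h for h in generate_all_hypotheses() if test_against_data(h)]
print(f"{len(surviving)} of {total} candidate hypotheses survive testing")
```

Of course, with realistic variable sets the enumeration explodes combinatorially, which is exactly where the 'sufficient computing power' question bites.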
I fear we must agree to disagree on the necessity of "will" for performing science. Why is it necessary? What do you mean by it? BTW, this might arise from you and me being in different scientific fields? (Mine is social science.)
I note a widespread tendency to privilege ourselves as 'special' in some way. Let me hasten to add that we undoubtedly are, but the question is whether we are more special than what we do, and whether what we do is reproducible by other means. 'Yes, it is' would be the verdict of technology to this point, but let's not turn that into an argument about whether what we do in science is also reproducible by other means.
To appeal to notions such as "will" and "sentience" is to escape into areas so diffuse as to defeat further argument. Wittgenstein would ask: 'to what are we pointing?'
But let me surmise that you bring in "will" to mean that humans' scientific pursuits are goal-directed, driven by needs we recognise as a result of a superbly big brain, with multi-sensory experience and a capacity for emotion, navigating a large, complex and ambiguous world. Good point, except I covered this by saying the machine in my thought experiment would "generate all possible hypotheses". No goal-setting needed!
By the way, apologies if I've set up a straw man here and this is not what you meant at all!! That's the risk once we get into philosophy, I think.
Otherwise, though, I agree with what you write above (I do, however, need to search back to your post concerning the Stanford study). The statistician in me especially likes your characterization of LLMs as a "huge logistic regression over human knowledge". Well put.
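For what it's worth, that characterization is more than a metaphor at the output layer. Setting aside everything upstream, the final step of a decoder is literally a multinomial logistic regression of the next token on the context embedding (standard softmax algebra, not something from your post):

```latex
% Softmax head of a language model as multinomial logistic regression:
% h(x) is the context embedding, W_w and b_w the output weight row and
% bias for token w, and the sum runs over the whole vocabulary.
P(w \mid x) = \frac{\exp\big(W_w\, h(x) + b_w\big)}
                   {\sum_{w'} \exp\big(W_{w'}\, h(x) + b_{w'}\big)}
```

Everything interesting is of course hidden in how h(x) is computed, which is presumably why 'huge' is doing the heavy lifting in the phrase.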