• XSS.stack #1 – the first literary journal by the forum's users

A Barrier to Entry to Creating AI Models // Question // Debate

imnotexist

RAID array
User
Joined
16.05.2023
Messages
60
Reactions
19
It seems that without an enterprise-scale amount of resources to train an AI model, we ALL use, no matter which AI, a base model that was trained by OpenAI (as I understand it).

I have seen the smaller bots: you can train one by having it do the same tasks you want it to do, then turn it into a more sophisticated model.

My question is: how compromised are all our AI models by OpenAI's engineering, given this practice?

I am trying to understand this issue better; forgive my ignorance.
 
You can train your own neural networks (deep learning) at home for a great number of tasks; there is nothing OpenAI can change about that. Your question makes more sense when you are talking about LLMs (large language models), since those really do require a great amount of resources. But even in that field OpenAI is not unique: other companies like Facebook, Google, etc. have their own LLM implementations. You can also run an LLM at home or on rented servers, but of course it will take much more time to train and to get some cool results.
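The "train at home" point can be made concrete with a minimal sketch: a single logistic-regression "neuron" trained by plain gradient descent on the AND truth table, pure stdlib Python, running in seconds on any CPU. All names here are illustrative, not from any post in the thread.

```python
import math
import random

# Minimal sketch (no frameworks): train a tiny logistic-regression
# "neuron" on the AND function with plain gradient descent. The point
# is only that small models need no enterprise-scale resources.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the AND truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = 0.0
lr = 0.5

for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y  # gradient of the log-loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # learned AND: [0, 0, 0, 1]
```

Scaling the same loop up to millions of parameters is where GPUs and frameworks come in, but the mechanics are identical.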

An important thing to keep in mind is that an LLM is not a panacea that solves all problems; AI itself has a lot of different approaches, each good for different kinds of tasks. A simple example: if you want to classify something, or even segment an image, you will get the best results using convolutional networks instead of LLMs. And you can train convolutional networks yourself if you have data, labels, and a mid-range GPU.
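The convolution operation at the heart of those networks is simple enough to sketch without any framework: slide a small kernel over an image and sum the element-wise products. The image and kernel below are made up for illustration (a Sobel-style vertical-edge detector on a half-dark, half-bright patch).

```python
# Minimal sketch of 2D convolution (plain lists, no framework):
# slide a 3x3 kernel over a grayscale "image" to produce a feature map.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for ki in range(kh):
                for kj in range(kw):
                    s += image[i + ki][j + kj] * kernel[ki][kj]
            out[i][j] = s
    return out

# 4x4 image: dark left half, bright right half.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# Vertical-edge kernel: fires where intensity changes left-to-right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

feature_map = conv2d(image, kernel)
print(feature_map)  # [[3.0, 3.0], [3.0, 3.0]] — the edge is detected everywhere
```

A real CNN stacks many such kernels with learned weights, nonlinearities, and pooling, but this is the core operation a mid-range GPU accelerates.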
 
we ALL use, no matter the AI, a base model that was trained via OpenAI.
Not at all. The most-used text-producing LLMs are based on Llama 2, which was trained by Meta and released by Zuck specifically to undercut OpenAI.
 
Speaking of OpenAI, over 75% of the staff are willing to QUIT if the board isn't fired and Sam Altman isn't reinstated LOL

But yes, this is in the context of LLMs.

Microsoft just GOATed the entire board of OpenAI.

If they all go to Microsoft, what a powerhouse arm that will create.
 
Speaking of OpenAI, over 75% of the staff are willing to QUIT if the board isn't fired and Sam Altman isn't reinstated LOL
They are already back; Sam Altman will be restored and Sutskever is very sorry. Even Elon is puzzled why. Rumor has it they actually achieved AGI in September.
 
Good for you. On the other hand, an OpenAI insider who previously correctly predicted and leaked some information about the company says they did.
Good for me in which aspect? LLMs (and other models) perform very badly on math; they can't even beat old computer algebra systems at simplifying expressions without introducing errors. The same happens with code: they produce small errors that are easy to correct, but even a kid would be smart enough to avoid that kind of error.
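The determinism those old tools give you can be illustrated with a stdlib-only sketch (not a full CAS — SymPy would be the real comparison): exact rational arithmetic never introduces the rounding or "hallucination" errors that probabilistic text generation can.

```python
from fractions import Fraction

# Exact rational arithmetic from the stdlib: deterministic, zero error.
exact = Fraction(1, 10) + Fraction(2, 10)
print(exact == Fraction(3, 10))  # True, exactly

# The same sum in floating point picks up representation error:
approx = 0.1 + 0.2
print(approx == 0.3)  # False — it is 0.30000000000000004
```

A computer algebra system extends this guarantee from arithmetic to symbolic simplification, which is exactly where an LLM's token-by-token guessing tends to slip.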
 

