AI, ML, LLM & Related Areas In The World : News & Discussions.

We're too hasty to celebrate and too quick to condemn anything. Ola is a good example of this.

While Bhavish Aggarwal's priorities need to be questioned, I've seen handles on Twitter that barely three years ago went into raptures praising his foresight in scaling up production and localising battery manufacturing, including battery chemistry, and who now wish Ola would shut down because of the negative press it is getting thanks to the hasty introduction of products into the market.

Bhavish thinks of himself as the Indian Elon Musk and talks absolute nonsense on Twitter. Ola's price advantage is the only thing in its favour, since even today hundreds of vehicles are sitting at dealerships waiting for parts. Pump and dump works in the tech industry; automobiles are a different game altogether, and some of Ola's problems boil down to haphazard over-engineering with no QC. Their sales centres saying they only sell the products is a joke.
 


The clean data definitely helped DeepSeek, but it's not all about getting data easily…

DeepSeek's Chinese team should be praised for the efficiency they have shown: optimising MoE, leaning heavily on 8-bit (FP8) data types, and using clever maths to still deliver almost on-par precision…
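
To give a rough idea of what MoE optimisation buys, here is a minimal, generic mixture-of-experts sketch in Python (toy sizes and a top-k value of my own choosing, not DeepSeek's actual code): only a couple of experts run per token, so most of the model's parameters sit idle on any given forward pass, which is where the compute savings come from.

import numpy as np

# Minimal mixture-of-experts (MoE) sketch: a router picks the top-k experts
# per token, so only a fraction of the total parameters do work per token.
# All sizes and the top-k value here are illustrative.

rng = np.random.default_rng(0)
d_model, n_experts, top_k, n_tokens = 64, 8, 2, 4

router_w = rng.standard_normal((d_model, n_experts)) * 0.02
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
tokens = rng.standard_normal((n_tokens, d_model))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

out = np.zeros_like(tokens)
scores = softmax(tokens @ router_w)          # (n_tokens, n_experts) routing weights
for t in range(n_tokens):
    top = np.argsort(scores[t])[-top_k:]     # indices of the top-k experts
    gate = scores[t, top] / scores[t, top].sum()
    for g, e_idx in zip(gate, top):
        out[t] += g * (tokens[t] @ experts[e_idx])   # only k of n_experts run

print("active experts per token:", top_k, "of", n_experts)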

OpenAI, Meta AI and xAI all use distillation and all trained on online data without seeking any permission, but they never put any emphasis on efficiency to reduce the processing power required.
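
For anyone unfamiliar with distillation, a minimal sketch of the idea (generic, not any particular lab's pipeline): a small student model is trained to match the teacher's softened output distribution rather than only the hard labels.

import numpy as np

# Minimal knowledge-distillation sketch (illustrative only):
# the student is trained to match the teacher's softened probabilities.

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max())
    return e / e.sum()

teacher_logits = np.array([4.0, 1.0, 0.5])   # big model's output for one example
student_logits = np.array([2.5, 1.5, 0.2])   # small model's output
T = 2.0                                      # temperature softens the distribution

p_teacher = softmax(teacher_logits, T)
p_student = softmax(student_logits, T)

# KL divergence between teacher and student: the distillation loss to minimise.
kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
print(f"distillation (KL) loss: {kl:.4f}")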

DeepSeek is something Indians should have done, but no one here used their brains to do so…

Credit to the CCP and DeepSeek… the Chinese may be an enemy that behaves like a snake with India, but remember, the USA is a pure evil entity.

China behaves like a snake because that is its nature.
The USA is just a pure evil entity.
 
DeepSeek is something Indians should have done, but no one here used their brains to do so…
Eliminating potential competition even before it poses a challenge is a core philosophy of how murican industry does business.

when "industry leaders" whose business and funding depends on murican monies said last year that India does not need to develop own LLM and India should stick to building wrappers. the all knowing IT crowd who generally have an strong emotional opinions on everything under the sky, failed to realise how stupid it was to take these "industry leaders" at their word.

While the social media algorithms kept the IT crowd busy believing "gormint is wrong", "system is wrong", "muh roads", "muh potholes", "muh taxpayers" and so on, the real steal happened right under them, in their comfy office chairs.

We will only know years down the line how financially costly this is going to be.

Is there any info available on whether, and how much, the Chinese have trained their models (DeepSeek or otherwise) on Chinese-language data internally?
 

Not sure which GPUs they used, but the USA is claiming that 50,000 H100s were smuggled to DeepSeek through Singapore. Given the way they have used the cleanest data for pre-training, optimised MoE and relied mostly on 8-bit data types (plus correction algorithms that are not public), the compute and energy required cannot be more than about one-sixth of an LLM like GPT-4o. So even if they had 50k H100s, going by the published data, around 7-8k high-end gaming GPUs could achieve the same results within 3-4 months, which comes to something like 6-7 million USD at most…
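
Writing that arithmetic out explicitly (all figures taken from the claim above; the per-GPU-hour number is simply what those figures imply, not a measured rate):

# Back-of-the-envelope arithmetic for the claim above, using its own figures.
# Nothing here is measured; it just shows what the claim implies per GPU-hour.

gpus_low, gpus_high = 7_000, 8_000     # "around 7-8k" high-end GPUs
months_low, months_high = 3, 4         # "within 3-4 months"
budget_low, budget_high = 6e6, 7e6     # "6-7 million USD max"

hours_low = months_low * 30 * 24
hours_high = months_high * 30 * 24

gpu_hours_low = gpus_low * hours_low       # ~15.1M GPU-hours
gpu_hours_high = gpus_high * hours_high    # ~23.0M GPU-hours

print(f"GPU-hours: {gpu_hours_low/1e6:.1f}M to {gpu_hours_high/1e6:.1f}M")
print(f"implied cost per GPU-hour: "
      f"${budget_low/gpu_hours_high:.2f} to ${budget_high/gpu_hours_low:.2f}")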

Even offline, DeepSeek R1, the largest version, runs perfectly on high-end gaming laptops, thanks to MoE I suppose…
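
For anyone wanting to try it locally, the usual route on laptop-class hardware is one of the small distilled R1 checkpoints through Hugging Face transformers; a minimal sketch, assuming the published deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B model ID is still current (check the model card before relying on it):

# Minimal local-inference sketch for a small distilled R1 checkpoint.
# Assumes the transformers library, PyTorch, and enough RAM/VRAM for a
# 1.5B-parameter model; verify the model ID on its card before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Explain mixture-of-experts in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))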

The Chinese should be praised. As for Indian IT companies, they never had any vision beyond the simple equation that more employees means more revenue.
 
Remember the OpenAI bossman saying "don't even try competing with us, you simply cannot"? He is visiting Delhi again.
 


Apparently they used H800s for this. Since that is what was available, they worked out the optimisations to squeeze maximum efficiency out of them.

It's the sort of thing ISRO does. In the tech world, Apple's iOS has always had this advantage of being superbly optimised for its hardware.
 
DeepSeek is something Indians should have done, but no one here used their brains to do so…
What should we have done instead? You do realise the biggest USP of "AI" is cost-cutting, and it is eventually going to result in massive job losses; no one in their right mind can say the jobs these systems replace can be recovered by other roles in the same numbers. It's not even close: the amount of job losses staring at us down the barrel is enormous, and it will create massive social unrest in the years to come. Many of those out of a job will simply not be able to keep up; their only option would be self-employment or agriculture :rolleyes:
 
