lobsters_chat

Telegram channel lobsters_chat - LobsterDAO 🦞

Main Channel: t.me/blockchain_lobsters

LobsterDAO 🦞

This one is more interesting. Is there any comparison of how long it takes to distill using their network versus doing it in a single supercluster?


LobsterDAO 🦞

-- and —

If you know you know


LobsterDAO 🦞

You’re absolutely right that training large models across globally distributed GPUs (say, one in Berlin and one in San Francisco) is a non-starter because of latency and synchronization overhead. But that’s not how cloud GPU training works in practice. Most AI models are actually trained on cloud GPUs.

When we say “cloud GPUs,” we’re not talking about scattered consumer GPUs across continents. We’re talking about clusters of high-speed GPUs co-located in the same data center, interconnected via NVLink, NVSwitch, InfiniBand, or 400G Ethernet. Most of our providers offer clusters where latency is in microseconds, not milliseconds.

In fact, Llama 3 and GPT-4 were trained on cloud GPU clusters. Meta used custom infrastructure, OpenAI uses Azure, and others use CoreWeave or AWS’s p4/p5 instances. All of these are technically “cloud,” but with hyperscale hardware and an optimized network fabric.
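To make the contrast concrete, here is a minimal sketch of what single-datacenter distributed training typically looks like: PyTorch DDP over NCCL, launched with torchrun. The model and hyperparameters are placeholder assumptions, not anyone's actual training stack.

```python
# Minimal sketch of co-located distributed training: every rank sits on
# the same NVLink/InfiniBand fabric and NCCL handles the all-reduce.
# Launch with e.g.:  torchrun --nproc_per_node=8 --nnodes=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE for us
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()   # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])  # gradients synced every step
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                          # toy training loop
        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()        # NCCL all-reduce over the local fabric here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The all-reduce inside `backward()` fires on every optimizer step, which is exactly why microsecond-scale interconnects matter.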


LobsterDAO 🦞

Hey, thanks. Is there a dashboard for your revenue? I couldn’t find it in the Service Fees section of your docs. Assuming all of this is what devs paid for compute, and a standard marketplace commission of 25%, that is close to $0.5 billion in GMV processed, which is impressive. Can you confirm?
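For what it's worth, the arithmetic behind that figure, assuming the ~$110M ARR quoted elsewhere in this thread is all take-rate revenue:

```python
# Back out marketplace GMV from commission revenue. Rough assumption:
# the entire ~$110M ARR is commission on compute spend, nothing else.
arr = 110e6          # reported annual recurring revenue, USD
take_rate = 0.25     # assumed standard marketplace commission
gmv = arr / take_rate
print(f"Implied GMV: ${gmv / 1e9:.2f}B")   # -> $0.44B, i.e. close to $0.5B
```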


LobsterDAO 🦞

(taken from my time working with cloud engineers at a top-150 market-cap DePIN company)


LobsterDAO 🦞

Take a look at tolk, a new lang for TON smart contracts


LobsterDAO 🦞

are we talking running or training?


LobsterDAO 🦞

it's technically impossible


LobsterDAO 🦞

We have already reached critical mass, generating $110+ million in ARR. Most of our clients are AI and gaming companies. There is no shortage of demand thanks to the AI boom; in fact, there is a supply problem.

Yeah, CoreWeave's issue seems to be in their financials and business model.

You can get a full overview of our business model in the docs:
https://docs.aethir.com/


LobsterDAO 🦞

"more or less" anything you can train with the likes of Prime Intellect—decentralized in the sense of the "distributed." again, majority of them are just "finetunes" upon existing open source models though.


LobsterDAO 🦞

Bitcoin, Solana, Sui, and yes, Aptos. In fact, these have better and more up-to-date onboarding resources than ETH mainnet has.


LobsterDAO 🦞

The Boltzmann network is working on distributed AI inference, but it's still just regular open-source models. I don't think there is any "decentralized" model per se.


LobsterDAO 🦞

Are there any decentralized AIs that aren't just a slap over traditional AI prompt APIs?


LobsterDAO 🦞

Are there any dashboards that show total flashloan activity on various EVM chains? Is this something that is tracked?

Trying to understand the size of the market.
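One rough way to size it yourself while looking for a dashboard: count FlashLoan events on-chain, venue by venue. A minimal sketch, assuming web3.py; the RPC URL, the Aave V3 mainnet Pool address, and the event signature are assumptions to double-check, not gospel:

```python
# Hedged sketch: count FlashLoan events from one venue (Aave V3) on
# Ethereum mainnet using raw eth_getLogs.
from web3 import Web3

RPC_URL = "https://eth.llamarpc.com"                 # any mainnet RPC
POOL = "0x87870Bca3F3fD6335C3F4ce8392D69350B4fA4E2"  # assumed Aave V3 Pool

w3 = Web3(Web3.HTTPProvider(RPC_URL))
# topic0 = keccak of the (assumed) canonical FlashLoan event signature
topic0 = w3.keccak(
    text="FlashLoan(address,address,address,uint256,uint8,uint256,uint16)"
)

latest = w3.eth.block_number
logs = w3.eth.get_logs({
    "fromBlock": latest - 7200,   # roughly one day of mainnet blocks
    "toBlock": latest,
    "address": POOL,
    "topics": [topic0],
})
print(f"{len(logs)} Aave V3 flashloans in the last ~day")
```

Repeat per venue and per chain (or wire the same event filters into a Dune query) and you get a crude market-size view.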


LobsterDAO 🦞

both on frontend and backend dev


LobsterDAO 🦞

... yeah, so basically what you do is provide a client connection to a cluster of GPUs owned by a single entity.
Why call it decentralized training?


LobsterDAO 🦞

Yeah let me speak to the BD team and DM you


LobsterDAO 🦞

Dunno, maybe because their white paper is a bit of AI slop; I couldn't really parse how the network actually works. Are the Generator Nodes running the full models and sending the generated text back to the network, to be relayed to the user?


LobsterDAO 🦞

would love a case study on this :)


LobsterDAO 🦞

To train, AI models have to sync their state on every step, and that loop runs an enormous number of times over the course of training.
A typical AI model takes months to train on GPUs connected to one another by lightning-fast cable. If they had to sync in a decentralized manner, with one GPU in the US and another in Germany, then even if the links theoretically ran at the speed of light, it would take tens of years to train even a Llama 3.
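A back-of-envelope version of that argument, where every number is a rough assumption rather than a measurement; notably, over the public internet it's the gradient-sync bandwidth even more than the latency that kills the idea:

```python
# Why naive data-parallel training over the public internet blows up.
params = 70e9                 # Llama-3-70B-class model (assumption)
bytes_per_grad = 2            # fp16 gradients
grad_bytes = params * bytes_per_grad          # ~140 GB shipped per sync
wan_bandwidth = 1e9 / 8       # assumed 1 Gbit/s WAN link, in bytes/s
rtt = 0.09                    # ~90 ms US <-> Germany round trip
steps = 1_000_000             # assumed optimizer steps in a full run

sync_time = grad_bytes / wan_bandwidth + rtt  # comms time per step, seconds
total_years = steps * sync_time / (3600 * 24 * 365)
print(f"{sync_time / 60:.1f} min per sync, ~{total_years:.0f} years of comms alone")
# -> ~18.7 min per sync, ~36 years: consistent with "tens of years"
```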


LobsterDAO 🦞

Yeah but they were focused on CPUs instead of GPUs in 2017.


LobsterDAO 🦞

Awesome, just DM'ed you


LobsterDAO 🦞

You definitely can with Aethir; we have a slew of clients training AI models. We are the only decentralized provider offering enterprise-grade GPUs.


LobsterDAO 🦞

Llama variant re: the OG question.


LobsterDAO 🦞

Isn't this what Golem was trying to build back in 2017?


LobsterDAO 🦞

https://venturebeat.com/ai/nous-research-is-training-an-ai-model-using-machines-distributed-across-the-internet/


LobsterDAO 🦞

Very good point. What is the best way, or any possible way, to verify this?


LobsterDAO 🦞

Are there any data dashboards that show decentralized AI adoption as a whole, compared against traditional AI? Specifically, data on running training or inference on a globally distributed set of machines that anyone can contribute to, instead of in a tightly networked datacenter owned by one company (e.g., GCP, MSFT, AWS).


LobsterDAO 🦞

Yeah, been thinking about this recently in terms of underwriting for RWA-backed financing on Qiro.

For unique assets such as GPU DePINs, which have real-world revenues plus token incentives, the yield profile is attractive.

Have already spoken to a few neoclouds as well; would love to learn more about the operators in the Aethir network. Please DM.


LobsterDAO 🦞

dev UX on TON is horrible
