
Grok is open source, but it needs eight A100 GPUs. Is your wallet ready? Just yesterday, Musk open-sourced his chatbot Grok, and this is no ordinary AI project: Grok is a 314-billion-parameter mixture-of-experts model, nearly twice the parameter count of GPT-3.5. Grok going open source means ordinary people like you and me have a chance to own a chatbot that beats GPT-3.5, and Musk says commercial use is allowed. But don't get excited just yet; let's talk about reality. The README states that, because of the model's large size, you need a machine with enough GPU memory to test the model with the example code. So how much is "enough" GPU? Not that much, really: just eight A100 cards will get it running. Counting eight A100s, at a price of roughly 300,000 RMB per card right now, deploying Grok locally means you would need to prepare at least 2.4 million RMB, and that is assuming you can even still buy A100s. In the end, I want to say that Grok's open-sourcing is a tech carnival, but also a perilous marathon. How will it change the future of AI? And how should each of us take part in this feast? These are questions worth pondering. Either way, Musk has once again shown that the only boundary of technology is our imagination: if we can think of it, he can do it. Are you ready to study the future?
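The cost claim above is simple arithmetic, and a quick back-of-the-envelope check confirms it. Note that the per-card price of roughly 300,000 RMB is the video's own rough figure, not a quoted market price:

```python
# Back-of-the-envelope hardware cost for running Grok-1 locally,
# using the rough numbers quoted in the video.
num_gpus = 8                 # A100 cards needed per the README's GPU-memory note
price_per_gpu_rmb = 300_000  # the video's rough estimate for one A100, in RMB

total_rmb = num_gpus * price_per_gpu_rmb
print(f"Estimated hardware cost: {total_rmb:,} RMB")  # 2,400,000 RMB
```

This matches the "at least 2.4 million RMB" figure in the narration, before counting servers, power, or cooling.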

Look, I mean, Grok had not open-sourced anything until people pointed out it was a little bit hypocritical, and then he announced that Grok will open-source things this week. I don't think open source versus not is what this is really about for him. The whole thing is unbecoming of a builder, and I respect Elon as one of the great builders of our time. I know he knows what it's like to have haters attack him, and it makes me extra sad that he's doing it to us. I hope that years in the future we have an amicable relationship.

Oh, he has not seen AGI. None of us have seen AGI; we have not built AGI. I do think one of the many things that I really love about Ilya is that he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. As we continue to make significant progress, Ilya is one of the people I've spent the most time with over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. Ilya has not seen AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.

I think there is definitely a place for open-source models, particularly smaller models that people can run locally; I think there's huge demand for that. There will be some open-source models and there will be some closed-source models; it won't be unlike other ecosystems in that way.

I think all of these models understand something more about the world than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil, and say, ah, this is all fake. But it's not all fake; it's just that some of it works and some of it doesn't.

We are not ready to talk about that. I mean, we work on all kinds of research. We have said for a while that we think better
reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet, but we're very interested in it. I don't know; that's an honest answer.

We will release an amazing model this year. I don't know what we'll call it, and I know we have a lot of other important things to release first.

I did not tweet about that. I never said we're raising seven trillion dollars; I mean, once there's misinformation out in the world... I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute.

People talk about how many jobs AI is going to do in five years, and the framework people have is what percentage of current jobs are just going to be totally replaced by an AI doing the job. The way I think about it is not what percent of jobs AI will do, but what percent of tasks AI will do, and over what time horizon. So if you think of all of the five-second tasks in the economy, the five-minute tasks, the five-hour tasks, maybe even the five-day tasks: how many of those can AI do? I think that's a way more interesting, impactful, and important question than how many jobs AI can do, because it is a tool that will work at increasing levels of sophistication, over longer and longer time horizons, for more and more tasks, and let people operate at a higher level of abstraction. So maybe people become way more efficient at the jobs they do.

I think there will be a new paradigm for that kind of thinking. I can imagine many ways to implement it, but I think that's less important than the question you were getting at, which is: do we need a way to do a slower kind of thinking, where the answer doesn't have to come right away? I guess, spiritually, you could say that you want an AI to be able to think harder about a harder problem and answer more quickly on an easier problem, and I think that will be important.

I used to love
to speculate on that question. I have since realized that I think it's very poorly formed, and that people use extremely different definitions for what AGI is. So I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z, rather than when we fuzzily cross this one mile marker. AGI is also not an ending; it's closer to a beginning, but it's much more of a mile marker than either of those things. But, in the interest of not trying to dodge the question, I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable.

Look, I was going to, I'll just be very honest with this answer: I was going to say, and I still believe this, that it is important that neither I nor any other one person have total control over OpenAI or over AGI, and I think you want a robust governance system. I can point out a whole bunch of things about all of our board drama from last year, about how I didn't fight it initially and was just like, yeah, that's the will of the board, even though I think it's a really bad decision. And then later I clearly did fight it, and I can explain the nuance and why I think it was okay for me to fight it later. But as many people have observed, although the board had the legal ability to fire me, in practice it didn't quite work, and that is its own kind of governance failure. Now, again, I feel like I can completely defend the specifics here, and I think most people would agree with that, but it does make it harder for me to look you in the eye and say, hey, the board can just fire me. I continue to not want supervoting control over OpenAI. I never had it, never wanted it; even after all this craziness, I still don't want it. I continue to think that no company should be making these decisions, and that we really need governments to put
rules of the road in place.

If we can build a better search engine than Google or whatever, then sure, we should; people should use the better product. But I think that would be to misunderstand what this can be. Google shows you, like, ten blue links, well, like thirteen ads and then ten blue links, and that's one way to find information. The thing that's exciting to me is not that we can go build a better copy of Google search, but that maybe there's just some much better way to help people find, act on, and synthesize information. I actually think ChatGPT is that for some use cases, and hopefully we'll make it be like that for a lot more use cases.

I think I'm an extremely trusting person. I have always had a life philosophy of, you know, don't worry about all of the paranoia, don't worry about the edge cases; you get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me, I was so caught off guard, that it has definitely changed me, and I really don't like this. It's definitely changed how I think about default trust of people and planning for the bad scenarios. I'm not worried about becoming too cynical; I think I'm the extreme opposite of a cynical person. But I am worried about becoming less of a default-trusting person.