牛cow
08-28
Stop the hype.
20x faster than NVIDIA GPUs! Cerebras rolls out the "world's fastest" AI inference solution
2024-08-28 11:09 Beijing time | Zhitong Finance

Zhitong Finance APP reports that AI startup Cerebras today announced Cerebras Inference, which it claims is the world's fastest AI inference solution. The company said: "Cerebras Inference delivers 1,800 tokens per second for Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B, 20x faster than hyperscale clouds built on NVIDIA (NVDA.US) GPUs."

Cerebras Inference is powered by the third-generation Wafer Scale Engine, and it is faster in part because it eliminates the memory-bandwidth bottleneck. Cerebras says its inference costs one-third as much as GPU-based instances on Microsoft's Azure cloud platform while drawing one-sixth the power.

"Cerebras solved the memory-bandwidth bottleneck by building the world's largest chip and storing the entire model on the chip," the company said. "With our unique wafer-scale design, we can integrate 44GB of SRAM on a single chip, eliminating the need for external memory and for the slow lanes that connect external memory to compute."

Micah Hill-Smith, co-founder and CEO of Artificial Analysis, a firm that provides independent analysis of AI models, said: "Cerebras is leading the AI inference benchmarks. For Meta's Llama 3.1 8B and 70B models, Cerebras delivers speeds an order of magnitude faster than GPU-based solutions. We measured over 1,800 output tokens per second on Llama 3.1 8B and over 446 output tokens per second on Llama 3.1 70B — new records in these benchmarks."

He added: "With speed that pushes the performance frontier and competitive pricing, Cerebras Inference is particularly attractive to developers of AI applications with real-time or high-volume requirements."

Notably, this could ripple through the entire AI ecosystem. As inference becomes faster and more efficient, developers will be able to push the limits of AI. Applications once held back by hardware constraints may now flourish, sparking innovations previously deemed impossible. Still, J. Gold Associates analyst Jack Gold cautioned: "Until we get more concrete real-world benchmarks and operation at scale, it is too early to estimate just how superior it really is."

Earlier this month, Cerebras filed for an initial public offering (IPO) and is expected to go public in the second half of this year. The company also recently appointed two new board members: Glenda Dorchak, a former executive at IBM (IBM.US), Intel (INTC.US), and Telefónica (TEF.US); and Paul Auvil, former CFO of VMware and Proofpoint.

The startup took another important step toward going public earlier this month by hiring Bob Komin as chief financial officer. Komin previously served as CFO of Sunrun, where he led the company's IPO process. He was also CFO at Flurry, acquired by Yahoo, and at Tellme Networks, acquired by Microsoft (MSFT.US).

Cerebras CEO and co-founder Andrew Feldman said: "Bob has been a key operating leader throughout his career, serving as a startup executive at several companies that invented major technology and business-model innovations and grew rapidly into industry leaders. His deep experience in financial leadership at growth-stage and public companies is invaluable to Cerebras."
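The memory-bandwidth argument above can be sanity-checked with a back-of-envelope model: during single-stream decoding, each generated token requires streaming roughly all model weights through the compute units once, so throughput is capped at bandwidth divided by model size in bytes. The sketch below is illustrative only; the HBM and SRAM bandwidth figures are assumptions, not vendor-verified specs.

```python
# Rough model: why LLM decode throughput is memory-bandwidth-bound.
# Per token generated, ~all weights must be read once, so:
#   tokens/s  <=  memory_bandwidth / (params * bytes_per_param)
# All hardware numbers below are illustrative assumptions.

def max_tokens_per_s(params_billions: float,
                     bytes_per_param: float,
                     bandwidth_gb_s: float) -> float:
    """Upper bound on single-stream decode throughput, in tokens/s."""
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Llama 3.1 8B in fp16 (2 bytes/param) against ~3,350 GB/s of HBM
# (H100-class figure, assumed):
hbm_bound = max_tokens_per_s(8, 2, 3_350)

# The same model with weights held entirely in on-chip SRAM at an
# assumed aggregate bandwidth of ~21,000,000 GB/s (wafer-scale figure):
sram_bound = max_tokens_per_s(8, 2, 21_000_000)

print(f"HBM-bound ceiling:  {hbm_bound:,.0f} tokens/s")
print(f"SRAM-bound ceiling: {sram_bound:,.0f} tokens/s")
```

Under these assumed numbers, a single HBM-fed accelerator tops out around a couple of hundred tokens per second per stream, which is why batching (and a large bandwidth jump from on-chip SRAM) matters so much; the actual measured figures quoted in the article depend on many factors this toy model ignores (KV-cache reads, batching, kernel efficiency).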