mastodonien.de

phpc.social

Timestamp                 Users  Delta    Toots  Toots/User  Title                      Version  maxTL
Fri 17.05.2024 00:00:22   4.609     +3  241.864        52,5  PHP Community on Mastodon  4.2.8      500
Thu 16.05.2024 00:00:14   4.606     +2  241.517        52,4  PHP Community on Mastodon  4.2.8      500
Wed 15.05.2024 00:00:15   4.604     +2  241.226        52,4  PHP Community on Mastodon  4.2.8      500
Tue 14.05.2024 00:00:25   4.602     +3  240.851        52,3  PHP Community on Mastodon  4.2.8      500
Mon 13.05.2024 00:00:25   4.599     +1  240.542        52,3  PHP Community on Mastodon  4.2.8      500
Sun 12.05.2024 00:00:11   4.598     +2  240.312        52,3  PHP Community on Mastodon  4.2.8      500
Sat 11.05.2024 00:00:21   4.596     +1  240.107        52,2  PHP Community on Mastodon  4.2.8      500
Fri 10.05.2024 00:00:11   4.595     +3  239.865        52,2  PHP Community on Mastodon  4.2.8      500
Thu 09.05.2024 00:00:21   4.592      0  239.557        52,2  PHP Community on Mastodon  4.2.8      500
Wed 08.05.2024 00:00:21   4.592      0  239.259        52,1  PHP Community on Mastodon  4.2.8      500

Fri 17.05.2024 12:36

Right now on stage, Enrico Zimuel presents a talk about LLMs, and reminds us that LLMs are "based on really big deep learning networks; no one fully understands how they work internally".

Enrico on stage with a slide that says:

LLM

A Large Language Model (LLM) consists of a neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabelled text using self-supervised learning.

A message is split into tokens.
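
A minimal Python sketch of that step, assuming a toy whitespace tokenizer and a made-up vocabulary (real LLMs use subword tokenizers such as byte-pair encoding, not word splits):

# Toy tokenizer: split on whitespace and map each word to an integer ID.
# Real LLMs use subword schemes such as byte-pair encoding instead.
vocab = {"<unk>": 0, "hello": 1, "world": 2, "how": 3, "are": 4, "you": 5}

def tokenize(message):
    """Split a message into tokens and look up each token's ID."""
    return [vocab.get(word, vocab["<unk>"]) for word in message.lower().split()]

print(tokenize("Hello world how are you"))  # -> [1, 2, 3, 4, 5]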

Each token is translated into a number using an operation called embedding.
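
A sketch of that lookup, assuming the toy vocabulary above; note that in a real model each token ID actually maps to a whole vector of numbers rather than a single number, and the size, dimension, and random weights here are purely illustrative:

import random

random.seed(0)    # deterministic toy weights
VOCAB_SIZE = 6    # matches the toy vocabulary above
EMBED_DIM = 4     # real models use hundreds or thousands of dimensions

# One vector per token ID. In a trained model these weights are learned,
# not random; random values only stand in for them here.
embedding_table = [[random.random() for _ in range(EMBED_DIM)]
                   for _ in range(VOCAB_SIZE)]

def embed(token_ids):
    """Translate each token ID into its embedding vector."""
    return [embedding_table[t] for t in token_ids]

print(embed([1, 2]))  # two 4-dimensional vectors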

An LLM works by taking an input text and repeatedly predicting the next token or word.
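
That prediction loop can be sketched in a few lines; predict_next_token is a hypothetical stand-in for the entire neural network, replaced here by a trivial counting rule so the example runs:

def predict_next_token(token_ids):
    """Placeholder for the neural network: return the most likely next
    token ID given the sequence so far. A toy rule stands in for the
    billions of learned weights of a real model."""
    return (token_ids[-1] + 1) % 6  # pretend the model always counts up

def generate(prompt_ids, max_new_tokens=5, end_token=0):
    """Repeatedly predict the next token and append it to the input."""
    token_ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        nxt = predict_next_token(token_ids)
        if nxt == end_token:  # stop when the model emits an end marker
            break
        token_ids.append(nxt)
    return token_ids

print(generate([1, 2]))  # -> [1, 2, 3, 4, 5]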

Since they are based on really big deep learning networks, no one fully understands how they work internally.

[Public] Replies: 0 Boosts: 1 Favourites: 0 · via Tusky
