--
agi, agi, agi, agi, agi, agi, agi
These are some of the words that end in agi. You can also use the word agi in a sentence. For example, "I am going to the grocery store to get some agi."
These are some of words that end in agi.
These are some words that end in agi.
maximize, maximize, maximize, maximize, maximize, maximize, maximize, maximize
These are some words that ends in agi
--
So I think this needs more work to get to "as good as ChatGPT". But having said that, congrats on the launch.
I haven't heard anyone describe the phenomenon clearly, but I expect it is a challenge with reasoning over both the intent of the prompt and specific token IDs.
Having said that, here are the words ChatGPT gave me for the same prompt:
Magi Nagi Sagi Yagi Adagi Galagi Tegagi Sigikagi Tagi Wagagi
It missed Unagi, surprisingly. But it is still leagues ahead of the response primordialsoup got from Lamini.
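My speculation on why "words ending in agi" is hard: subword tokenizers often absorb the ending into a single opaque token, so the model reasons over token IDs rather than letters. A toy greedy longest-match tokenizer (vocabulary entirely made up for illustration, not a real BPE vocabulary) sketches the effect:

```python
# Toy illustration only: vocabulary is invented for this example.
# Real tokenizers (e.g. BPE) learn their vocabularies from data.
TOY_VOCAB = {"un", "agi", "unagi", "m", "ag", "i", "magi", "a", "g", "u", "n"}

def tokenize(word: str) -> list[str]:
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(word):
        for end in range(len(word), i, -1):  # try longest match first
            piece = word[i:end]
            if piece in TOY_VOCAB:
                tokens.append(piece)
                i = end
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

print(tokenize("unagi"))  # ['unagi'] -- one token; the letters a, g, i are not visible
print(tokenize("nagi"))   # ['n', 'agi']
```

If "unagi" is a single token, the model has no direct view of its spelling, which would explain why "ends in agi" prompts produce such scattershot lists.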
There are 7 instances of the letter 'e' in the sentence: "Try asking chatGPT to count how many e's are in a sentence."
another one:
The words with the letter 'e' from the sentence "Try asking chatGPT to count how many e's are in a sentence" are:
asking
sentence
and another, notice the last one:
Here are some English words containing three instances of the letter 'e':
Nevertheless
Extreme
Relevance
Precedence
Residence
Easement
Demeanor
Please note that this is not an exhaustive list, but these examples should give you an idea of words with three 'e's in them.
What's surprising here is that it's still capable of writing hundreds of lines of Python code.
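For the record, both claims the models fumbled are trivial to sanity-check in plain Python (nothing model-specific here):

```python
# The sentence the model was asked about: the true count is 5, not 7.
sentence = "Try asking chatGPT to count how many e's are in a sentence"
print(sentence.count("e"))  # 5

# Check the model's "three e's" list the same way (case-insensitive):
for word in ["Nevertheless", "Extreme", "Relevance", "Precedence",
             "Residence", "Easement", "Demeanor"]:
    print(word, word.lower().count("e"))
# Several entries miss the mark: "Nevertheless" and "Precedence" have
# four e's, and "Demeanor" (the last one) has only two.
```

This is exactly the kind of character-level task where token-based models struggle even though a two-line program nails it.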
"what is pistacchio? explain the question, not the answer."
All these toy LLMs: "pistacchio is..."
GPT is the only one that consistently understands these instructions: "The question "what is pistachio?" is asking for an explanation or description of the food item..."
This makes these LLMs basically useless for obtaining anything but hallucinated data.
This is a bit like complaining that your compiler refuses to produce the right outputs for code you've already determined is incorrect.
--- GPT-3.5 ---
Here are some words that end in "agi":
Strategy
Swarajya
Arthroplasty
Sialagogue
Podagric
Gynecology
Physiognomy
Ophthalmology
Esophagitis
Otalgia
--- GPT-4 ---
Here are some words that end in "agi":
Swaggy
Raggi
Magi
Gagi
Stagi
Please note that some of these words may not be commonly used or may be specific to certain dialects or regions.
I’m super excited to announce Lamini, the LLM engine that gives every developer the superpowers that took the world from GPT-3 to ChatGPT!
I’ve seen a lot of developers get stuck after prompt-tuning for a couple days or after fine-tuning an LLM and it just gets worse—there’s no good way to debug it. I have a PhD in AI from Stanford, and don’t think anyone should need one to build an LLM as good as ChatGPT. A world full of LLMs as different & diverse as people would be even more creative, productive, and inspiring.
That’s why I’m building Lamini, the LLM engine for developers to rapidly customize models, built on amazing foundation models from a ton of institutions: OpenAI, EleutherAI, Cerebras, Databricks, HuggingFace, Meta, and more.
Here’s our blog announcing us and a few special open-source features! https://lamini.ai/blog/introducing-lamini
Here’s what Lamini does for you:
Your LLM outperforms general-purpose models on your specific use case
You own the model, weights and all, not us (if the foundation model allows it, of course!)
Your data helps the LLM, and builds you an AI moat
Any developer can do it today in just a few lines of code
Commercial-use-friendly with a CC-BY license
We’re also releasing several tools on Github: Today, you can try out our hosted data generator for training your own LLMs, weights and all, without spinning up any GPUs, in just a few lines of code from the Lamini library. https://github.com/lamini-ai/lamini/
You can play with an open-source LLM, trained on generated data using Lamini. https://huggingface.co/spaces/lamini/instruct-playground
Sign up for early access to the training module that took the generated data and trained it into this LLM, including enterprise features like virtual private cloud (VPC) deployments. https://lamini.ai/contact
So much click bait in the LLM space.
Introducing Lamini, the LLM Engine for Rapidly Customizing Models
Obviously it still takes a huge amount of work to customize a model to be as good as GPT-4 or ChatGPT; that’s exactly why we are building Lamini: to give developers tools to make it easier.
Hopefully it is clear that it will take more work than 1 day.
I don't really care to click on something I know is obviously lying to me.
Check out https://github.com/huggingface/peft; they've packaged it up nicely. And read up on LoRA (https://arxiv.org/pdf/2106.09685.pdf). That should get you started.
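For intuition on why LoRA makes fine-tuning cheap: instead of updating a full d×d weight matrix, it trains two low-rank factors B (d×r) and A (r×d) with r ≪ d. A back-of-the-envelope sketch (the hidden size and rank below are illustrative choices, not values from any specific model):

```python
# LoRA replaces a full weight update dW (d x d) with a low-rank
# product B @ A, where B is (d x r) and A is (r x d), with r << d.
d = 4096   # hidden size (illustrative)
r = 8      # LoRA rank (a common choice in the paper's experiments)

full_update_params = d * d       # fine-tuning the whole matrix
lora_params = d * r + r * d      # only the two small factors are trained

print(full_update_params)                # 16777216
print(lora_params)                       # 65536
print(full_update_params / lora_params)  # 256.0
```

That ~256x reduction in trainable parameters per layer is why LoRA-style adapters can run on modest GPUs while leaving the base weights frozen.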
Paid LLM hosting. 50% cheaper than OpenAI, pay per compute needed to run & create the LLM. Export the weights anytime you want.
Enterprise VPC deployments.
What's the difference between this and other data pipelines like Alpaca?