Here's some output from q4_0 quantization of CodeLlama-7b-Python (first four lines are the prompt):
    # prints the first ten prime numbers
    def print_primes():
        i = 2
        num_printed = 0 # end of prompt
        while num_printed < 10:
            if is_prime(i):
                print(i)
                num_printed += 1
            i += 1

    def is_prime(n):
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def main():
        print_primes()

    if __name__ == '__main__':
        main()
It will be interesting to see how the larger models perform, especially after community tuning and with better context/prompting.

    def primes_upto(limit: int):
        """Generate prime numbers < *limit*."""
        # Sieve of Eratosthenes
        is_prime = [True] * limit
        for n in range(2, limit):
            if is_prime[n]:
                yield n  # found a prime number
                for c in range(n*n, limit, n):  # start with the square; smaller multiples are marked already
                    is_prime[c] = False  # mark composites

    if __name__ == "__main__":
        from itertools import islice
        print(*islice(primes_upto(100), 10))  # -> 2 3 5 7 11 13 17 19 23 29

print("1, 2, 3, 5, 7, 11... and so on!
I have been waiting for weeks, and am still waiting, to get access to Llama2 (released a month+ ago), and access to this model goes through the same form, so I'm not very hopeful. Are you getting it from other methods?
Now granted, that's more or less true by definition, and I don't doubt there are communities and fields where it is considered one, but it still shows some of the subtleties at play when using language models.
Current consensus has settled on excluding 1 from the definition, but there are examples of publications well into the 20th century that still included 1 as prime.
Fascinating read about the subject: https://arxiv.org/pdf/1209.2007.pdf
-1 True
0 True
1 True
2 True
3 True
4 False

Congratulations! You must be that arrogant guy everybody hates interviewing with, the one with the superiority complex.
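The True results for -1, 0 and 1 above come from trial-division checks like the one the model generated earlier in the thread: for n < 2 the `while i * i <= n` loop body never runs, so the function falls through to `return True`. A minimal guard fixes it (a sketch reusing the generated function's shape):

```python
def is_prime(n):
    # Primality is only defined for integers >= 2,
    # so reject -1, 0 and 1 up front instead of falling through.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

for n in range(-1, 5):
    print(n, is_prime(n))  # now -1, 0 and 1 all print False
```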
How about instead of just failing people over literally nothing (wasting everybody's time and money) - just ask the candidate whether they could somehow reduce the search space by utilizing the properties of a prime number?
My suggestion for your next interview: decide to hire them based on their leetcode score alone, but invite them to the interview just to flex that you're still better at puzzle solving :-D
Perfect
The problem with a team that always seeks the optimal solution is that they never get shit done. And that's rarely optimal in a business context. Your view doesn't strike me as arrogant so much as short-sighted.
I think on a team of one I want the guy who gets it done without thinking. On a team of two I want a guy that's somewhere in the middle. And on a team of three, that's when I want my optimiser. Because in the time that guy number 3 has written, let's say, 3 files of optimal code, guy number 1 has written 10 files of not-optimal code. And you rarely need guy number 3 to fix all ten, you just need him to fix the one or two files that actually matter.
Clients rarely ask “is this optimal?”. But they always ask “is this done?”.
All three developers have different approaches. All three are assets.
I think on some level you're then making the same mistake that, if your comment is honest, we could say the "just add one guy" guy made: not factoring in his speed.
I think code readability, rather than code optimisation, is a far more important thing to get hung up on in an interview (and readability, I must remind some of you, is not to be confused with conciseness). You can see this in the end result. But if you're following along and the interviewee already knows you know what's going on, you can also see it in smaller ways - it could be as simple as going back and changing a variable name such as open (maybe a function?) to isOpen (almost always a boolean value).
I think most of us are in this position pretty often, where we're writing and figuring something out at the same time. Maybe we just see if it works first and don't give a name much thought, or maybe we change the actual value or type of a variable in the process and the name becomes ambiguous. I'd look for small indicators that show me this person is considering readability. But I still don't necessarily expect the code to be as readable as it would be in a more natural setting. I mean, I think 90% of these sessions start off with the person saying "I swear I can type" within the first five minutes of being watched - and if being watched flusters them enough to affect their typing, it certainly affects their coding as well.
> The Code Llama models provide stable generations with up to 100,000 tokens of context. All models are trained on sequences of 16,000 tokens and show improvements on inputs with up to 100,000 tokens.
Edit: Reading the paper, key retrieval accuracy really deteriorates after 16k tokens, so it remains to be seen how useful the 100k context is.
Meta says later on that they aren't releasing it and give no explanation. I wonder why given how incredible it seems to be.
The github repo associated with that paper is linked below. It links to the paper on arxiv, but also has some data in the repo.
what is the trick to achieve 100k context? They can't just use 100k wide transformer layer, it is cost prohibitive, right?..
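For Code Llama specifically, the paper's answer is "long context fine-tuning": the models are still trained on 16k-token sequences, but the base period of the rotary position embeddings (RoPE) is raised from 10,000 to 1,000,000, which lets positions extrapolate well past the training length. Attention itself is still quadratic; you only pay that cost for the context you actually use. A rough sketch of what the base change does to the RoPE frequencies (numbers follow the paper; treat the rest as illustrative):

```python
def rope_inv_freq(dim: int, base: float):
    # Per-pair rotation frequencies used by rotary position embeddings:
    # inv_freq[j] = base^(-2j/dim) for each pair of hidden dimensions.
    return [base ** (-2 * j / dim) for j in range(dim // 2)]

# Llama's default base vs. the larger base used for long-context fine-tuning.
default = rope_inv_freq(128, 10_000.0)
long_ctx = rope_inv_freq(128, 1_000_000.0)

# A larger base makes the slow dimensions rotate far less over 100k positions,
# so distant positions remain distinguishable instead of aliasing.
print(default[-1] * 100_000)   # angle at position 100k with base 1e4
print(long_ctx[-1] * 100_000)  # much smaller angle with base 1e6
```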
https://ai.meta.com/blog/code-llama-large-language-model-cod...
[0] https://github.blog/2023-07-28-smarter-more-efficient-coding...
[1] https://github.com/features/preview/copilot-x
[2] https://github.blog/2023-07-20-github-copilot-chat-beta-now-...
It's extremely good. I keep a terminal tab open with 7b running for all of my "how do I do this random thing" questions while coding. It's pretty much replaced Google/SO for me.
I wonder if we could make such specific LLMs (one that is proficient in all things Rust, another- all things Linux, all things genomics, all things physics modeling etc) and have them talk to each other to collaboratively solve problems.
That would be a crazy future thing! Putting machines truly to work..
There's a quite fresh and active project replicating something similar with a herd of llamas: https://github.com/jondurbin/airoboros
There was a similar attempt for Godot script trained a few months ago, and it's reportedly pretty good:
https://github.com/minosvasilias/godot-dodo
I think more attempts haven't been made because base llama is not that great at coding in general, relative to its other strengths, and stuff like Starcoder has flown under the radar.
[1] https://huggingface.co/TheBloke/CodeLlama-13B-Python-fp16
`ollama run codellama:7b-instruct`
https://ollama.ai/blog/run-code-llama-locally
More models uploaded as we speak:
My questions asked how to construct an indexam for Postgres in C, how to write an R-tree in JavaScript, and how to write a binary tree in JavaScript.
GGUF seems not optimised yet, since quantizing with a newer version of llama.cpp supporting the format fails on the same hardware. I expect that to be fixed shortly.
For inference, I understand that the hardware requirements will be the same as before.
ollama run codellama "write a python function to add two numbers"
More models coming soon (completion, python and more parameter counts)

Not a bad context window, but it makes me wonder how embedded code models would pick that context when dealing with a codebase larger than 100K tokens.
And this makes me further wonder: when coding with such a tool (or at least with the knowledge that they're becoming more widely used and leaned on), are there some new considerations we should be applying (or at least starting to think about)? Perhaps having more or fewer comments, perhaps terser and less readable code that would consume fewer tokens, perhaps different file structures, or even more deliberate naming conventions (like Hungarian notation, but for code models) to facilitate searching or token pattern matching of some kind. Ultimately, in what ways could (or should) we adapt to make the most of these tools?
You can, I suppose, contract your code so that it's context-free and uses fewer tokens, but that makes it more confusing for both humans and language models.
Taken to the extreme, you can see obviously with one letter functions and variables like i, j, k the model will be able to infer literally nothing and, thus, produce arbitrary nonsense.
Clearly the solution is to do what we already do to manage complexity which is to decompose large tasks into smaller black box modules with an api where the (large number of tokens) implementation is hidden and not known or relevant to using it.
If you give an LLM a function signature and good description, maybe some usage examples, it doesn’t need the implementation to use it.
Terseness decreases the ability of LLMs to process code; it doesn’t solve context length, and even at best it doesn’t scale.
100k tokens is plenty.
You don’t need to do anything like that.
>100k tokens is plenty.
The context window can be really helpful when a new library is released and the user wants to generate code targeting its API. When the training cutoff is August 2023, any library released after that date is unknown to the model.
My general opinion in regards to context window, is that 1 trillion tokens context window still may not be enough for all use cases.
You can also probably skip including standard library headers since those will be well known to the LLM through its fine tuning.
Either way, consider that a typical preprocessed C++ file would push against the 100K limit even with some optimizations. You will definitely want to have some middleware doing additional refinement before presenting that file to the LLM.
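A sketch of that kind of middleware: a hypothetical, deliberately naive pre-filter that strips `#include` lines and comments from C++ source before it goes to the model (a real tool would need a proper lexer; this regex ignores `//` and `/* */` inside string literals):

```python
import re

def trim_for_llm(cpp_source: str) -> str:
    """Drop #include lines and comments from C++ source to save tokens."""
    # Remove block comments first, then line comments and includes.
    no_blocks = re.sub(r"/\*.*?\*/", "", cpp_source, flags=re.DOTALL)
    kept = []
    for line in no_blocks.splitlines():
        if re.match(r"\s*#\s*include\b", line):
            continue  # the LLM already knows the standard headers
        line = re.sub(r"//.*", "", line)
        if line.strip():
            kept.append(line.rstrip())
    return "\n".join(kept)

src = """#include <vector>
// add two numbers
int add(int a, int b) { /* trivial */ return a + b; }
"""
print(trim_for_llm(src))
```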
The method I used to choose which files to feed GPT-4 was embeddings-based. I got an embedding for each file and then an embedding from the instruction + some simple processing to pick the files more likely to be relevant. It isn't perfect but good enough most of the time in medium-sized codebases (not very large ones).
The one thing I started doing because of how I implemented this is make files shorter and move stuff into different files. Having a 1k+ LOC file is prohibitive because it eats up all the context window (although with 100k context window maybe less so). I think it's a good idea to keep files short anyways.
There are other, smarter things that can be done (like embedding and passing individual functions/classes instead of entire files), so I have no doubt someone will build something smarter soon. You'll likely not have to change your coding patterns at all to make use of AI.
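That embeddings-based file picking can be sketched as follows (cosine similarity over per-file vectors; the toy vectors below stand in for whatever embedding API you use):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pick_files(instruction_vec, file_vecs, k=2):
    """Rank files by similarity to the instruction embedding; keep the top k."""
    ranked = sorted(file_vecs.items(),
                    key=lambda kv: cosine(instruction_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy vectors standing in for real embeddings of each file's contents.
files = {
    "auth.py":    [0.9, 0.1, 0.0],
    "billing.py": [0.1, 0.9, 0.0],
    "utils.py":   [0.3, 0.3, 0.4],
}
print(pick_files([1.0, 0.0, 0.0], files, k=1))  # -> ['auth.py']
```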
You start a project by defining the task. Then as you iterate, you can add new information to the prompt. But it can be also partially automated - the model can have a view of the file structure, classes, routes, assets and latest errors.
I was really hoping that the one year update of Codex would be that - a LLM that can see deep into the project, not just code, but runtime execution, debugging, inspecting and monitoring. Something that can iterate like autoGPT. Unfortunately it didn't improve much and has weird conflicts with the native code completion in VSCode, you get freezes or doubled brackets.
I think hypergraph is an overlooked concept in programming language theory
A little understated, this is state of the art. GPT-4 only offers 32k.
Is anyone working on a code AI that can suggest refactorings?
"You should pull these lines into a function, it's repetitive"
"You should change this structure so it is easier to use"
Etc
With Cody you can create embeddings for your entire repo, so Cody will have much greater context about your code base and the problems you're trying to solve.
Disclaimer: I just joined Sourcegraph a few weeks ago.
With that said, they have recently changed the architecture to require a local install, and I have not (yet) managed to get it working on NixOS. Once I have some more time, I will try again - it looks like there will be some hoops to jump through. https://nixos.org/manual/nixpkgs/stable/#ssec-pkgs-appimageT...
Kudos to the Sourcegraph team. Sourcegraph's original product was nicely thought out and ahead of its time. Nice to see how the original product gave a solid basis for building out Cody.
Last I heard they are in beta and don't work very well (even on the examples page: the "add types" brush is too strict, since `a` and `b` are checked for `null`, and the "fix simple bug" is a typo)
An instruct model means that you can ask it to do what you want, including asking it to give you refactoring ideas from the code you will give it.
I absolutely love the idea of using one of these models without having to upload my source code to a tech giant.
Edited to add: Though somewhat slow, swap seems to be a good enough substitute for the loads of RAM required. Ollama says "32 GB to run the 13B models", but I'm running the llama2:13b model on a 16 GB MBP.
If there are any other opinions or thoughts on this, I'd be very happy to learn as well. I have considered the eGPU route connected to a 1L PC such as a ThinkCentre m80/90.
Any "eli5" tutorial on how to do so, if so?
I want to give these models a run but I have no powerful GPU to run them on so don't know where to start.
I see the GitHub copilot extensions gets a new release one every few days, so is it just that the way they're integrated is more complicated so not worth the effort?
This works well for me, except the 15B+ models don't run fast enough on a 4090 - hopefully exllama adds support for non-llama models, or maybe it supports CodeLLaMa already, I'm not sure.
For general chat testing/usage this works pretty well with lots of options - https://github.com/oobabooga/text-generation-webui/
I assume quantized models will run a lot better. TheBloke already seems like he's on it.
From my experience with github copilot and GPT4 - developers are NOT going anywhere anytime soon. You'll certainly be faster though.
However, it is hard to tell how that might pan out. Can such an ML/AI do all parts of the job effectively? A lot of non-coding skills bleed into the coder's job: for example, talking to the people who need input into the task and finding out what they are really asking for - and beyond that, what the best solution is to their underlying problem, while meeting nonfunctional requirements such as performance, reliability and code complexity, and being a good fit for the business.
On the other hand eventually the end users of a lot of services might be bots. You are more likely to have a pricing.json than a pricing.html page, and bots discover the services they need from searches, negotiate deals, read contracts and sue each other etc.
Once the programming job (which is really a "technical problem solver" job) is replaced either it will just be same-but-different (like how most programmers use high level languages not C) or we have invented AGI that will take many other jobs.
In which case the "job" aspect of it is almost moot. Since we will be living in post-scarcity and you would need to figure out the "power" aspect and what it means to even be sentient/human.
That's why we're so excited to see these extraordinary advances that I personally didn't think I'd see in my lifetime.
The fear is legitimate and I respect the opinions of those who oppose these advances because they have children to provide for and have worked a lifetime to get where they are. But at least in my case, the curiosity and excitement to see what will happen is far greater than my little personal garden. Damn, we are living what we used to read in the most entertaining sci-fi literature!
(And that's not to say that I don't see the risks in all of this... in fact, I think there will be consequences far more serious than just "losing a job," but I could be wrong)
It would likely impact a far larger swath of the engineering / design industry.
https://huggingface.co/chargoddard/llama2-22b
Theoretically this is an even better size, as it would fit on a 20GB-24GB GPU with more relaxed quantization and much longer context.
Metrics are slightly below 13B, but the theory is that the higher parameter count is more amenable to finetuning. If you search for 22B on huggingface, you can see that frankenllama experiments are ongoing:
I would argue that many teams will have to reevaluate their LLM strategy _again_ for the second time in a week.
I tried Ideogram yesterday and it felt too much like existing generators (base SD and Midjourney). DALLE2 actually has some interestingly different outputs, the problem is they never update it or fix the bad image quality.
I guess since Xcode doesn’t have a good plug-in architecture for this I began experimenting more with a chat interface.
So far gpt-4 has seemed quite useful for generating code, reviewing code for certain problems, etc.
There's also the real-time aspect where you can see that it's wrong via the virtual text, type a few characters, then it gets what you're doing and you can tab complete the rest of the line.
It's faster to converse with when you don't have to actually have a conversation, if that makes sense? The feedback loop is much shorter and doesn't require natural language, or nearly as much context switching.
It might be the quantization or my lacklustre prompting skills affecting it, though. To be fair I did get it to output a little bit of useful code after trying a few times.
Where it's a bit shit is when it's used to provide autosuggest. It hallucinates plausible-sounding functions/names, which for me personally are hard to spot when they're wrong (I suspect that's a function of the plugin).
I haven't yet read the whole paper (nor have I looked at the benchmark docs, which might very well cover this), but I'm curious how these benchmarks are designed to avoid issues with overfitting. My thinking is that the canned algorithm-type problems common in software engineering interviews are probably overrepresented in the training data used for these models, which might produce artificially better performance by LLMs versus their performance on the more domain-specific tasks they might be used for in day-to-day work.
[1] https://github.com/openai/human-eval
[2] https://github.com/google-research/google-research/tree/mast...
But don't worry, this community moves fast!
You can scream that this is progress all you want, and I'll grant you that these tools will greatly speed up the generation of code. But more code won't make any of these businesses provide better services to people, lower their prices, or pay workers more. They are just a means to keep money from flowing out of the hands of the C-Suite and investor classes.
If software engineering becomes a solved problem then fine, we probably shouldn't continue to get paid huge salaries to write it anymore, but please stop acting like this is a better future for any of us normal folks.
The days of getting paid well for making crud are numbered (which most of us do, even in the most interesting problem spaces).
# need a front end boilerplate that hits a backend with the following end points. REST api for movies catalogue, and a corresponding ui. Oh, unit tests please. Go with a responsive design and also make a React Native version (matter of fact provision it to my iPhone). Decide between Heroku or AWS, set up deploy with git hooks.
# scrape IMDb for initial population of the db
# I think a Reddit like comment system would be good to add, so add it. No upvote/downvote though
# handle user login with google/fb/email
# also make an admin page to manage all this
I guess the technical product designer will be the new unicorn.
This view is critically flawed in two major ways:
1) AI is not anywhere near being able to replace the majority of what developers do on a product team. We are decades away from a PM at Facebook being able to type "make a twitter clone that uses instagram login and can scale to 1 billion" and have an AI just do it.
2) Programming and product work is not zero sum. The more we can do means the more product we can make. It means more products can be made overall. After the loom came out, we simply made more clothes than ever before and in the process created a ton of jobs. We are not at some peak software point where we've completely saturated all humanity's need for software or profitable software and thus tools that increase efficiency don't put us out of work.
And frankly, if we develop the kind of general AI that accept a query like "make a facebook competitor capable of scaling to 10 billion" and simply do it, inventing whatever languages, frameworks, hardware, processors, patterns, methodologies, global datacenters handling global politics and law, etc, etc necessary to accomplish such a task, then so be it. I welcome the overlords!
But the answer to that is to deal with concentration of capital, not to eschew better tools.
It will simply move to a higher level of abstraction.
Remind me, how many programmers today are writing in assembly?
Unfortunately, I don't believe there's a way to stop (or even slow down) this train. We can't defeat it, so the only logical answer is to join it.
It's the classical issue with progress removing jobs. In today's world, since mostly everyone (aside from the capitalists themselves) relies on jobs to survive, barring a complete switch from capitalism (which will not happen in our lifetimes), we're fucked.
Next best thing we can do is to try and democratize it enough so that not only the rich have access to it.
1. As a species decide to never build another LLM, ever.
2. Change the path of society from the unequal, capitalist one it's taken the last 200-300 years.
3. Give up
I know which I believe in :). Do you disagree?
I see both vscode and netbeans have a concept of "inference URL" - are there any efforts like language server (lsp) - but for inference?
https://ai.meta.com/research/publications/code-llama-open-fo...
In my applications, GPT-4 connected to a VM or SQL engine can and does debug code when given error messages. "Reliably" is very subjective. The main problem I have seen is that it can be stubborn about trying to use outdated APIs and it's not easy to give it a search result with the correct API. But with a good web search and up to date APIs, it can do it.
I'm interested to see general coding benchmarks for Code Llama versus GPT-4.
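That run-the-code, feed-back-the-error loop can be sketched as follows; `ask_model` is a hypothetical stand-in for the actual LLM call, stubbed out here so the example is self-contained:

```python
import os
import subprocess
import sys
import tempfile

def run_python(code: str):
    """Execute code in a subprocess; return (ok, combined output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=30)
        return proc.returncode == 0, proc.stdout + proc.stderr
    finally:
        os.unlink(path)

def repair_loop(code: str, ask_model, max_rounds: int = 3):
    """Run code; on failure, hand the error message to the model for a fix."""
    for _ in range(max_rounds):
        ok, output = run_python(code)
        if ok:
            return code
        code = ask_model(code, output)  # model proposes a patched version
    return code

# Stub model: "fixes" a NameError by defining the missing name.
def ask_model(code, error):
    return "x = 1\n" + code if "NameError" in error else code

fixed = repair_loop("print(x)", ask_model)
print(run_python(fixed)[0])  # -> True
```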
Imagine a database of "Here is the console error, here is the fix in the code"
Maybe one could scrape git issues with console output and tagged commits.
What?!? No Befunge[0], Brainfuck or Perl?!?
[0] https://en.wikipedia.org/wiki/Befunge
/just kidding, of course!
I got a bridge, but it was the wrong size.
Thanks, in advance.
Asking for purposes of educating non-technologists.
I can see some people fine-tuning it again for general-purpose instruct.
I do wonder how much use it'll get, seeing as running a heavy language model on local hardware is unlikely for most developers. Not everyone is running a system powerful enough for big models like this, and I also doubt that companies are going to set up large AIs for their devs. It's just a weird positioning.
for now it is :) but with quantization advances etc. it is not hard to see the trajectory.
Pretend you're an uncreative PM on an AI team; what part of Facebook or VR could you feasibly improve by iterating on LLMs? Perhaps the content moderation system... but that would require wrangling with the company ethics committee, and someone else at the company probably already took ownership of that idea. You've got to do something compelling or else your ML engineers are going to run off somewhere else.
If I were to ask my ML engineers about what they wanted to work on, they're going to avoid areas where their model is outgunned (i.e.: chat) and instead prefer lower hanging fruit which generalizes well on a resume (i.e.: "Pioneered and published key innovations in LLM code-generation").
Of course, the alternative answer is that Meta wants to replace all of their jr. developers with GPUs, but I think their leadership is a little too preoccupied with VR to be actively pushing for such a transformative initiative in anything more than a very uninvested capacity (e.g.: "Sure I'll greenlight this. Even if it doesn't pay off I don't have any better ideas")
The end goal of AI isn't to make your labour more productive, but to not need your labour at all.
As your labour becomes less useful if anything you'll find you need to work more. At some point you may be as useful to the labour market as someone with 60 IQ today. At this point most of the world will become entirely financially dependent on the wealth redistribution of the few who own the AI companies producing all the wealth – assuming they take pity on you or there's something governments can actually do to force them to pay 90%+ tax rates, of course.
- Easy plug & play model installation, and trivial to change which model once installed.
- Runs a local web server, so I can interact with it via any browser
- Ability to feed a model a document or multiple documents and be able to ask questions about them (or build a database of some kind?).
- Absolute privacy guarantees. Nothing goes off-machine from my prompt/responses (USP over existing cloud/online ones). Routine license/update checks are fine though.
I'm not trying to throw shade at the existing ways of running LLMs locally, just saying there may be room for an OPTIONAL commercial piece of software in this space. Most of the existing tools are designed for academics to do academic things. I'm talking about a turn-key piece of software for everyone else: something that gives you an "almost ChatGPT" or "almost Copilot" experience for a one-time fee, and that you can feed sensitive private information.
There's a "PrivateGPT" example in there that is similar to your third point above: https://github.com/jmorganca/ollama/tree/main/examples/priva...
Would love to know your thoughts
The concerns are twofold: 1. We might inadvertently use someone else's intellectual property. 2. Someone else might gain access to our intellectual property.
What you are describing would help alleviate the concern about issue 2, but I’m not sure if it would help alleviate the concerns with issue 1.
That said, non-subscription is essential, and that's probably going to be a heavy lift considering how quickly things are evolving.
The only thing I've been able to think is they're trying to commoditize this new category before Microsoft and Google can lock it in, but where to from there? Is it just to block the others from a new revenue source, or do they have a longer game they're playing?
They also don't have the same economic setup and DNA as MS/OpenAI. Large corporate customers don't pay for access to the FB cloud, nor are they likely to -- Ellison has spent years building out Oracle Cloud, and he's on the FB board, for example. And I bet you didn't think of using Oracle's Cloud for your last project.
So, your company DNA is free-to-all social based on ad monetization, with a large bet on metaverse / AR / experiential social compute being next. You aren't a trusted corporate partner for anything but gatekeeping your immense community through ad sales.
And, it's clear you a) have some of the most interesting private social data in the world, including photos and DMs and texts, and b) this AI thing is huge.
A play that doesn't f with your existing corporate structure too much is to build this stuff, give it away, keep publishing, build your AI team internally, and see where it takes you.
This isn't the only play, but I think it's reasonable. It's pretty clear large enterprises are going to need their own, internally built / owned, Foundation models to be competitive in a bunch of arenas in the next decade. In this case, if FB can get a little mindshare, keep the conversation going, and as a sidenote, be a disruptor by lowering Azure/OpenAI revs with open releases at-the-edge, that's probably a strategy win.
If I were in charge of AI strategy at FB, I'd probably double down more on generative AI, and I'd be working hard on realtime multimodal stuff -- their recent very large multimodal speech to text in multiple languages work is good. If a team could eyeball realtime-ish video chat with translations, that would be something the platform has a natural advantage in pushing out. Generative hits existing customers, and metaverse asset creation, which is going to experience radical changes in costs and productivity over the next few years, and impact Oculus 100% no matter what anybody wishes were true.
If their choice right now is not to try to overtly monetize these capabilities but instead commoditize and "democratize" what others are offering it suggests they think that a proprietary monetization route is not available to them. In other words they do not leave money on the table. They think that (at least right now) there is no money on the table that they can get to.
Rather than remaining quiet and isolated, the best alternative - their conjectured thinking goes - is to show up as they do, buying up good will with various stakeholders, maintaining mindshare internally and externally etc.
Assuming that the above reading is correct it still leaves various options as to why they may have come to that conclusion: For example reasoning about the future of this sector they might be thinking that there is no real technical moat and they simply accelerate that reality to gain some brownie points.
It may be also idiosyncratic reasons specific to their own business model (data privacy challenges and how any AI monetization will mesh with all that). The drawback of being the elephant in the room is that there is not much room to move.
The nature of their long game depends on which of the decision branches carries more weight. Maybe it is wait-and-see until others clear up the regulatory hurdles. Or keep the engines running until the real and irreducible added value of LLM algos and the like becomes clear.
Working towards NLU that can solve content moderation once and for all? Contrast with tiktok which is clearly using word filters that are easily worked around with phrases like "un-alived" or "corn".
They want to replace influencers and your friends with chatbots and keep you scrolling through an infinite feed of ads and AI generated content?
We followed a similar model under more duress at Netscape. When you use Firefox that’s the fruit of that effort. It didn’t save Netscape, but Meta has a better and more diversified revenue base.
This compares favorably with Google, which is as likely to cannibalize its search business with generative AI as to create new value for itself.
Thus, for all the gen AI stuff like this, for which Meta doesn't have an obvious path to commercialization, it makes sense to release it publicly. They get plenty of benefits from this - for one, engineers (and smart people generally) who are working on really complex problems like to be able to talk about the work they're doing. If you're picking between jobs at Meta and Google, the fact that Meta's going to release your stuff publicly might well be the deciding factor.
I would also argue that there's an economic incentive. Right now, being seen as an AI company is definitely a positive for your multiple. I think the movement of Meta's stock price over the last 12 months relative to their change in profit and revenue is certainly driven in part by the perception that they're a leader in AI.
Yes. I said it many times. Meta is already at the finish line in the AI race to zero. All the other cloud-based AI models cannot increase their prices given that a $0 free AI model is available to be self-hosted or used on-device for private / compliance reasons.
Cloud-based AI models cannot afford to compete with free or close to free. It costs Meta close to nothing to release a readily available $0 AI model which is good enough for most use-cases that ChatGPT has already done.
> The only thing I've been able to think is they're trying to commoditize this new category before Microsoft and Google can lock it in, but where to from there? Is it just to block the others from a new revenue source, or do they have a longer game they're playing?
Mostly it benefits the PyTorch ecosystem, around which Meta has an active community.
AI has no moat, but many players are still in denial about this. Microsoft and others might have tight enough control to use a product-dumping strategy - getting people dependent on their implementation so they can start charging - but that isn't a delusion Meta can have.
That max revenue license they used with the models seemed fairly clever to me. It will seed the environment with players that base their product on Meta tech in return for them being born with a poison pill preventing their use by big players (other than Meta) buying them. This is a long term play that may not really work but it creates the potential for big opportunities. And even if it doesn't work out, denying easy wins for their powerful competitors might be worth the price on its own.
By democratizing AI access, Meta is generating more capable developers, which will help make the Metaverse - where FB leads - a reality. They have already realized they have a losing gambit with Google, Apple, Microsoft (also X?) holding an antagonistic monopoly against Meta product advancement.
because language models are a complementary product, and the complement must be commoditized as a strategy.
I see AMD as a bigger beneficiary, since very soon AMD will equal Nvidia for inference and fine-tuning, though AMD has a long way to go to match it in foundation model training.
But they're a partner in Llama too. Why is Microsoft in this space too, how do they benefit?
In the meantime, we are still waiting for Google to show what they have (according to their research papers, they are beating others).
> User: Write a loop in Python that displays the top 10 prime numbers.
> Bard: Sorry I am just an AI, I can't help you with coding.
> User: How to ask confirmation before deleting a file ?
> Bard: To ask confirmation before deleting a file, just add -f to the rm command.
(real cases)
No thanks, going back to Winamp.