I think there are three components needed to have the best (admittedly enticing) experience: Cody itself, Sourcegraph in the background (optionally), and an LLM called Claude from Anthropic. Claude is very much proprietary. Sourcegraph is open core, but to use it as Cody's "helper", do I need the proprietary features? Without Claude and Sourcegraph Enterprise/Cloud, what can Cody do, say, with a LLaMA-based LLM, should that integration happen?
Again, what I've read, taken at face value, seems really promising. I've used Sourcegraph a few times in the past and sometimes wondered how it would benefit my commercial work. Adding an LLM could make this a next-level tool, possibly something that regular chat-style LLM services don't currently offer.
The network dependencies are Cody --> Sourcegraph --> Anthropic. Cody does need to talk to a chat-based LLM to generate responses. (It also hits other Sourcegraph-specific APIs, which are optional.)
We are working on making the chat-based LLM swappable. Anthropic has been a great partner and is stellar to work with, but our customers have asked for the ability to use GPT-4 as well as the ability to self-host, which means we are exploring open source models. We're actively working on that at the moment.
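To make the "swappable LLM" idea concrete, here's a minimal sketch of what a pluggable chat-backend boundary could look like. All names here (`ChatProvider`, `EchoProvider`, `ask`) are hypothetical illustrations, not Cody's actual API; the point is only that a client coded against an interface can swap Anthropic, OpenAI, or a self-hosted model behind it.

```typescript
// Hypothetical provider boundary for a swappable chat LLM backend.
// These names are illustrative only; they are not Cody's real API.

interface ChatMessage {
  role: "human" | "assistant";
  text: string;
}

interface ChatProvider {
  name: string;
  complete(messages: ChatMessage[]): Promise<string>;
}

// An in-process stub standing in for a real HTTP-backed provider
// (Anthropic, OpenAI, or a self-hosted open source model).
class EchoProvider implements ChatProvider {
  name = "echo";
  async complete(messages: ChatMessage[]): Promise<string> {
    const last = messages[messages.length - 1];
    return `echo: ${last.text}`;
  }
}

// The client depends only on the interface, so backends are interchangeable.
async function ask(provider: ChatProvider, question: string): Promise<string> {
  return provider.complete([{ role: "human", text: question }]);
}
```

The design choice is the usual one: keep the editor plugin's logic provider-agnostic and isolate each vendor's wire format in its own adapter.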
Sorry for any lack of clarity here. We would like to have Cody (the 100% open source editor plugin) talk to a whole bunch of dev tools (OSS and proprietary). We think it's totally fine to have proprietary tools in your stack, but we would prefer to live in a world where the thing that integrates all that info in your editor, using the magic of AI and LLMs, is open source. This fits into our broader principle of selling to companies/teams while making tools free and open for individual devs.
The ability to use other LLMs, especially open ones, is promising. I guess it's mostly a matter of how APIs are standardised across these products. I mostly use Copilot and truly hope things can get better than that. The lack of control is especially infuriating, as is its tendency to go off on repeats for no discernible reason. On paper, Cody looks to do better here.
Good on Sourcegraph for contributing back to the open source community. LLMs rely on open source code more than meets the eye.
We currently have a notebooks UI (e.g., https://sourcegraph.com/notebooks/Tm90ZWJvb2s6MTg1NA==), but the plan is to roll Cody into this and make it a super-rich UI for learning/understanding anything and everything about code.
An Observable notebook is a standalone program, sort of a cross between a makefile and a spreadsheet. You write code snippets in JavaScript to calculate things, and it automatically recalculates them when their dependencies change. It's pretty powerful since you can import JavaScript libraries (to draw graphs, for example) and call APIs.
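The makefile/spreadsheet analogy can be sketched as a toy dependency graph: each cell is a function of named inputs, and changing an input recomputes the cells downstream of it. This is a heavily simplified model for illustration, not Observable's actual runtime (which builds a proper topologically sorted dataflow graph); the `Notebook` class and its methods are invented here.

```typescript
// Toy model of Observable-style reactivity: cells are functions of other
// values, and any change triggers recomputation of dependent cells.

type CellFn = (deps: Record<string, number>) => number;

class Notebook {
  private values = new Map<string, number>();
  private cells = new Map<string, { deps: string[]; fn: CellFn }>();

  // Set a raw input value and propagate the change.
  setInput(name: string, value: number): void {
    this.values.set(name, value);
    this.recompute();
  }

  // Define a computed cell in terms of other names.
  defineCell(name: string, deps: string[], fn: CellFn): void {
    this.cells.set(name, { deps, fn });
    this.recompute();
  }

  get(name: string): number | undefined {
    return this.values.get(name);
  }

  // Naive recompute: re-evaluate cells in definition order whenever
  // anything changes. Observable instead sorts the dependency graph
  // and only re-runs the affected cells.
  private recompute(): void {
    for (const [name, { deps, fn }] of this.cells) {
      if (deps.every((d) => this.values.has(d))) {
        const env: Record<string, number> = {};
        for (const d of deps) env[d] = this.values.get(d)!;
        this.values.set(name, fn(env));
      }
    }
  }
}
```

Usage mirrors the spreadsheet feel: define `double` from `x` and `plusOne` from `double`, then change `x` once and both downstream cells update on their own.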
Examples: https://observablehq.com/collection/@skybrian/digital-signal...
Sourcegraph is also free to use and downloadable as a local app (https://docs.sourcegraph.com/app) or you can use sourcegraph.com for open source. Our intention is to sell to teams/companies, while making tools for individual devs free to use. There have been a few cases in the past where we've misstepped and come across as selling to individual devs. If this ever happens, please flag to me (https://twitter.com/beyang) or sqs (https://twitter.com/sqs) directly and we'll correct it.
Don't be a car salesman.
There is recent work showing the potential to beat Copilot's performance (e.g., https://arxiv.org/abs/2303.12570) with much smaller models (500M vs. 10B+ parameters).
Inspired by this work, I'm building Tabby (https://github.com/TabbyML/tabby), an OSS GitHub Copilot alternative. Hopefully it can make low-cost AI coding accessible to everyone.