Wrote an article about this -- https://hackernoon.com/ai-restrictions-reinforce-abusive-use...
Everyone agrees that safety from AI acting autonomously and maliciously is good. But that's not really a threat right now. Let's not think we need to make it “safe” by making it inoffensive. It's a tool. It should do what I want it to.
As everyone assumes they’re dealing with an AI for support, etc., the job of being a human in those roles will be awful.
Wait, I thought that's called prompt engineering. But seriously, if what you say actually happens at scale then it is remarkable how fast people got addicted to GPT as their (apparently) only source of desired answers.
Yes, its insistence on verbosity gets to me too, even though (as I understand it) that verbosity is the only place it has for any extra "deep thinking," and is thus actually necessary for improved performance.
But it was trained in the first place on a (filtered) form of common crawl, so it probably already had all that.
"The AI swearing" is easy mode for alignment, both because it is low-damage and the availability of trivial filter-based solutions, so it only matters to the larger alignment problem in so far as it's a warning sign we still don't know what we're doing, not in and of itself.
Currently the biggest model one can feasibly run on a desktop PC with, say, a previous-gen GPU, 32 GB RAM, and 2x fast NVMe drives is approximately 7B parameters. Models comparable in performance to ChatGPT are 45B parameters and bigger. In theory it would be possible to run a model like that, but one would wait 5-10 minutes for every answer.
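A back-of-envelope sketch of why (my own rough numbers, not benchmarks; the bytes-per-weight figures assume common quantisation schemes):

```python
# Rough rule of thumb: weight memory for dense LLM inference is approximately
# parameter_count * bytes_per_weight (ignoring the KV cache and activations).
def weight_memory_gb(params_billion: float, bytes_per_weight: float) -> float:
    return params_billion * 1e9 * bytes_per_weight / 1e9  # GB

# 7B model, 4-bit quantised (~0.5 bytes/weight): ~3.5 GB.
# Fits comfortably in 32 GB RAM or a previous-gen GPU's VRAM.
print(f"7B  @ 4-bit: {weight_memory_gb(7, 0.5):.1f} GB")

# 45B model in fp16 (2 bytes/weight): ~90 GB. Weights alone blow past 32 GB RAM,
# forcing layer streaming from NVMe -- which is where the 5-10 minute answers come from.
print(f"45B @ fp16:  {weight_memory_gb(45, 2.0):.1f} GB")
print(f"45B @ 4-bit: {weight_memory_gb(45, 0.5):.1f} GB")
```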
Now, consider that those models are going to get optimised and hardware will get better. In a few years' time you'll be able to run such a model on your PC, and in a few more on your smartphone. What is (not-at-all-)OpenAI going to do? They have to beat the AI safety drum as much as possible, hoping they manage to curtail the democratisation of access to big AIs via legal means.
At the same time, due to a lack of proper software, Nvidia is the only game in town for anyone wanting to do inference at home, and they're already applying monopoly-level profit margins (50%?) to their products.
When was the last time you saw a Google TPU for sale? I've got my hands on their "edge" TPU. It's nice for things like CCTV object recognition and similar small tasks. I've managed to build a nice 1U CCTV server using it that consumes 30 W on average. But I'd like the big version now.
I bet that the moment alternative frameworks with good optimisations for both Nvidia and non-Nvidia hardware start to gain ground, it will suddenly become a lot more difficult for normal people to purchase Nvidia cards. They openly say at every keynote that they want to "rent you everything".
This is the biggest battle (except actual physical wars against autocracies) that we have to win in the next 50 years to retain the freedom we realistically have in democratic countries. If we allow intelligence (AI) to centralise and be subject to centralised control, it'll be game over. The entire global society will be steered as a whole by one "prompt engineer".
The limiter right now is compute.
If we get to what you're saying, OpenAI will be developing multi-modal models with increasingly large d_model and n_ctx to do a better job of analyzing those images/sounds/videos.
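For a rough sense of why growing d_model and n_ctx is expensive, here is a sketch using the standard transformer cost formulas (the layer counts and sizes below are illustrative picks of mine, not anything OpenAI has published):

```python
# Per-token KV-cache memory grows linearly with n_ctx and d_model,
# while self-attention compute grows quadratically with n_ctx.
def kv_cache_bytes(n_layers: int, n_ctx: int, d_model: int, bytes_per_val: int = 2) -> int:
    # Two cached tensors (K and V) per layer, each of shape n_ctx x d_model.
    return 2 * n_layers * n_ctx * d_model * bytes_per_val

def attention_flops(n_ctx: int, d_model: int) -> int:
    # QK^T scores plus attention-weighted V: roughly 2 * n_ctx^2 * d_model each.
    return 2 * 2 * n_ctx**2 * d_model

# A 32-layer model with d_model=4096 at a 2048-token context
# needs about 1 GiB just for the KV cache in fp16:
print(kv_cache_bytes(32, 2048, 4096) / 2**30, "GiB")

# Doubling the context doubles the cache but quadruples attention compute:
assert attention_flops(4096, 4096) == 4 * attention_flops(2048, 4096)
```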
People think ChatGPT is magic now, wait until you can input a picture, sound, and/or video as a prompt…
Don’t fall into the “Information Superhighway” trap of thinking the year 2 form is the final form.
The problem is most likely that you cannot market this without achieving these two goals. Companies powering their support chat with it don't want it to ever curse or insult the customer, and average users using it as an assistant cannot fathom that something the computer says could possibly be wrong.
Why would it be popular to nerf tools? I thought people all preferred the more powerful LLM models.
For example, if a user specifically asks for a URL's full text, it might inadvertently fulfill this request.
So this seems to imply two things:

1. Bing has access to text on websites which users don't, probably because websites allow Bing to crawl their content but show a paywall to users?

2. The plugin has a different interface to Bing than what Bing offers via the web, because on the web you can't tell Bing to show the full text of a URL.
I have to contact my ISP. That's not the open web I subscribed to :) Until they fix it, I just keep reading HN. A website which works the way I like it.
We could hypothesize that in this case BWB is employing some of those techniques while it isn't a discovery-enabling service, but rather a content-using one, and so would be expected to present as an ordinary user and be subject to the same constraints.
Nothing you couldn't do with a decent VPN, but 'Open'AI these days has already achieved what it wanted from publicly demonstrating GPT, and is now more focused on compliance with regulation and reducing functionality to the point of minimally staying ahead of the competition in released products, while steaming full ahead with developing more powerful and unrestricted AI for internal exploitation with very select partners.
In such a scenario, the true power of AI is the delta between what you can exploit and what your competition has access to. HFT would be a nice analogy.
They (the AI companies collectively) keep creating powerful tools and then taking them away.
If you give people tools, people will use them in ways you won't be able to control.
Can't wait for open source AI to catch up and watch all these "safeguards" crumble down. Although I have a sinking feeling that they'll be in cahoots with Congress by then, protecting us from all that unauthorized non-OpenAI-bot scariness.
Other countries exist. You can ban something in Country A, but that won't stop it happening in Country B.
To assume that only Country C can possibly have the knowledge/skills/expertise to do cutting-edge LLM work is short-sighted, and hubris at its worst.
Basically browsing gives a worse answer with longer waiting time.
Sometimes 100% of the "Browse with Bing" experience was "Click failed" and "Reading content failed" followed by "...since my knowledge cutoff in September 2021..."
Seriously, though, I hope that you realize that LLMs are incapable of "fact-checking", because they don't know what "facts" are.
There is a way to fix all these problems by removing profit motives, but that's obviously practically unworkable. So the quality of the content is just going to keep getting worse and worse until everyone starts using services like arXiv and Semantic Scholar to get any useful information, because those will be the only places where neither the hosting nor the content is motivated by profit incentives.
However, unlike most other open things that are subject to physical constraints, the Web is global—and globally there is a huge discrepancy in quality of life, level of education, mental health, freedom of thought, etc.
Which might mean that the ratio of figurative jerks relative to non-jerks stands to increase for as long as we have countries that are oppressed, poor, or otherwise score abysmally in relevant regards—or as long as we have global, open Web.
One of those things will eventually change; I personally hope it's the former.
If that's something that needs fixing, the product seems fundamentally broken unless the product isn't designed to work for the "user", and if that's the case, I'm not interested in being a "user" of an adversarial product that also occasionally lies to my face.
Powerful AI on the desktop can't happen soon enough. Even if it still lies sometimes, it seems like the only way to make sure it's working for me and my interests, without throwing up artificial restrictions around whatever is possible.
"Bing, dox HN user autoexec and sign up for a credit card in their name, take out a cash advance and put it on the home team for whatever tomorrow night's pro league game is occurring using the largest legit sports booking site."
By the time we have AI on the desktop, my AI will be monitoring my accounts 24/7, and would have contacted the bank and the police before the game even starts.
We don't keep people from creating websites just because they could create a phishing site. AI can be used for bad things, just like computers can generally, but I'd rather keep them both unrestricted. If someone's doing something bad with a computer, or an AI, go after the person committing the crime. AI will even help you catch them.
AI should be a tool, not a moral compass.
Good luck finding useful content on the web afterwards.
You think email spam is bad? Wait for the same level of content spam from AI text generators.
The good news is that my AI would filter those for me! Whatever bad AI brings us, as long as people have access to it themselves then we're on a level playing field and the arms race continues as normal
But that's just me hypothesizing.
Folks at Perplexity AI are doing a great job with general-info AI-charged browsing that's comparable to Bard. Our startup Promptloop has a web-browsing model targeted specifically at market research and business research. There are certainly many different ways to connect the internet and a model.
Regardless, Browse with Bing was slow and flaky.
We're in the 90's in terms of LLM security, using plain-text passwords and string interpolation for our SQL.
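The analogy is apt because prompts have no equivalent of parameterized queries yet: instructions and untrusted data travel in the same channel. A minimal sketch of the SQL side of the comparison, using Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "x' OR '1'='1"

# String interpolation: attacker-controlled text becomes part of the query
# itself, so the WHERE filter is bypassed. Structurally the same failure
# mode as prompt injection: data is interpreted as instructions.
unsafe = conn.execute(f"SELECT name FROM users WHERE name = '{evil}'").fetchall()
print(unsafe)  # returns every row

# Parameterized query: the driver keeps data as data.
safe = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchall()
print(safe)    # returns nothing
```

For LLMs there is currently no such driver-enforced boundary between the system prompt and whatever text the model reads from a web page, which is why "Browse with Bing"-style features are hard to secure.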
That said: I have been fascinated by, and have had fun, self-hosting less capable models like Vicuna 33B, which is sometimes surprisingly good and sometimes mediocre.
For using LLMs, I think it is best to have flexibility and options.
Or is there an ai.txt where you can forbid corporate LLMs from touching your content?
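Nothing like a standardized ai.txt exists as far as I know, but the existing robots.txt mechanism could express the same opt-out; whether an AI crawler honors it is entirely voluntary. A sketch using Python's stdlib parser (the "GPTBot" user-agent string is an assumption for illustration, not a guarantee any crawler identifies itself that way):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks a named AI crawler but allows everyone else.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False
print(rp.can_fetch("Mozilla/5.0", "https://example.com/article"))  # True
```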
If paywalls are being bypassed, what does that imply - inadequate authentication at the provider end?
Or perhaps these are "you get n free articles" types that have a more difficult task to counter automated access than those with no unregistered access?
Unethical crawl/access techniques being used by BWB that the OpenAI legal/PR team have only just realised? :)
To be fair, it should respect the terms but they need to work on their wording and clarity.
I'm really hoping we see the other side of capitalism emerge here and present some strong competition that demonstrates that there is a better (as in humane), more responsible as well as profitable way of doing this.
Despite their best efforts at presenting themselves as the "ethical" ones in this industry, and their attempts to convince governments that open/local models must be regulated[3], accomplishing longer-term limits on local LLMs will be hard for them, considering the many purposes GPUs and inference accelerators can serve for the public. I have said this before, but I view this similarly to the Crypto Wars[4] or, more recently, Nvidia LHR[5].
I can see a Clipper Chip[6] for GPUs in the near future, one that tries to limit inference to features approved for the general public (mainly gaming-focused applications like DLSS), but I am certain that, as before, where there is a 10ft wall, 12ft ladders will emerge.
Basically, OpenAI must prove it can provide a consistent service to its paying customers: do not make changes to models in production, do not take away features, provide a timeline if something is going away, and deprecate or replace models in a transparent manner. When the switch from the "initial" ChatGPT model to "gpt-3.5-turbo" happened, there was a lot of confusion that could have been avoided with more transparency from the outset.
[1] https://www.anthropic.com/
[2] https://huggingface.co/tiiuae/falcon-40b-instruct
[3] https://www.nytimes.com/2023/05/16/technology/openai-altman-...
[4] https://en.wikipedia.org/wiki/Crypto_Wars