2) Custom silicon. Open source tools & decentralization of fab tech (by countries not wanting to be subject to international trade problems... as well as Moore’s Law slowing) are gonna make this like making a PCB.
3) Electric airplanes. With wings. “Drones” as VTVL electric helicopters have been big for a decade, but there are physical limitations to that approach. If you want to see the future, look at drone delivery in Rwanda by Zipline. But also, I think we’ll see passenger electric flight start small and from underserved regional airports sooner than you think, doing routes and speeds comparable to High Speed Rail (and with comparable efficiency) but much more granular and without massive infrastructure.
4) Small tunnels by The Boring Company and probably competitor companies with similar ideas. When you can build tunnels for one or two orders of magnitude cheaper than today, cheaper than an urban surface lane, then why NOT put it underground? And they’ll be for pedestrians, bikes, and small subways like the London Underground uses. Gets a lot of hate from the Twitter Urbanist crowd, but what TBC has done for tunneling cost (on the order of $10 million per mile) is insanely important for how we think about infrastructure in the US.
5) Reusable launch technology. The world still doesn’t grok this. They will. It (long term) enables access to orbit comparable to really long distance airfare or air freight.
6) Geoengineering. I’m not happy about it, but it’s insanely cheap so... it’ll probably happen.
If you mean what I think you mean (tiny marks on everything that encode information to help computers figure out what they're looking at), I agree. In particular, I've long been waiting for someone in the self-driving sphere to give up on trying to crack the problem with just imaging the world as it is. In a saner world, countries would already be standardizing machine-readable markers on roads and posts and traffic signs. I'm still hoping someone will wake up and make use of this "cheat code".
Assuming such tunneling is in fact practical, I vote for burying the vehicles and letting the pedestrians have the surface...
https://austingwalters.com/chromatags/
Imagine encoding virtual objects or NPCs into a fiducial without a database... basically a 3D model + actions into a piece of paper you can attach anywhere.
Didn’t the Las Vegas 1.5 mi tunnel cost ~$50MM? I don’t think 1.5 mi of surface road (not considering right-of-way costs) costs that much.
I worked at ULA for a year about five years ago. At that time they were arguing that it wasn't going to be cost effective. Back then though there may have been one or two SpaceX landings.
Since I left I haven't kept up with this debate at all. Do you know if ULA changed their stance after all the successful launches?
There are unsolved physical limitations to that with no solution on the near horizon, AFAIK: the energy density of batteries (energy per kg) is simply too low for airplanes to be efficient.
> from underserved regional airports sooner than you think, doing routes and speeds comparable to High Speed Rail (and with comparable efficiency) but much more granular and without massive infrastructure
I didn't understand that. Is there a limitation on using normal jet-fuel planes from regional airports?
I know about libresilicon [1], are there any others in this space?
New software development is 99% aimless churn.
We don’t need more new tech. We need better applications of old tech. There is so much software that works perfectly fine already. What’s missing is connecting it to real world problems.
"Everything great was created in the '80s, and we've been rediscovering those things every ten years since."
I'm not firm on "the '80s" - maybe this stuff is older than I think - but I think the principle still holds. If it's a problem today, somebody probably thought about it before, and then others came around and wrapped things differently.
It's not BAD to wrap things differently, but the old stuff had more of the sharp corners sanded off, and sometimes we lose that battle-hardened aspect when we rewrite code.
Except for garbage collection/whatever is happening with memory safety today. That's the good stuff.
Like looms.
Robotics is actually generally pretty slow. Regular (serial) robot arms are usually significantly slower than a human arm. Some parallel robots (ie where the motors are mostly stationary and don’t have to be waved around by other motors), like a SCARA or Delta robot, can go about 2-3x the speed of a human, but the difference isn’t massive (60 vs 150 picks per minute?).
But looms are insane. Their task is simpler, but they can do over 2,000 picks per minute (!). The yarn in air-jet looms can be moving at over 200 mph. And even mechanical looms like rapier looms or projectile looms are super fast. The mechanisms are also super advanced and hard to wrap your mind around. Centuries of optimizing the first really good instance of industrial automation will do that, I suppose.
It makes me think we haven’t reached a completely flat plateau in mechanical development. Our robots today are actually pretty primitive compared to where they could... where they really should be. It also shows just how badly I think a lot of futurists have underestimated human mechanical capability. Human dexterity and force density is crazy impressive. Humans are actually super strong, fast, AND precise.
And hard automation like looms is also underestimated vs “robot arms.” Hard automation is so much more effective if you can do it. Robot arms alone aren’t that great compared to people.
* Better in this case would be a fundamental design to prevent spoofing, provide S2S encryption maybe E2E encryption, fixing MIME typing issues, fixing Rich Text/HTML display, etc. Basically an actually good faith replacement of e-mail instead of a vendor co-opting.
"The best under-the-radar car? It's a horse and buggy I tells ya!" Every one of these posts on HN has to have a hot take that's contrarian.
And sometimes the tech we need is "out there" and has been for a while, but just hasn't hit "critical mass" yet.
Take @kroltan's answer. I am also extremely bullish on RDF, Wikidata, and the like. But most of this stuff is pretty old now, especially in "Internet years". Which leads, of course, to the question of where the line is between incremental refinement of "old tech" and actual "new tech" as a discrete thing.
We need to find more/better ways of integrating people with tech.
Tech on its own is 10% of the solution. Integrating humans with technology is underrated.
(I say this as the owner of various enterprise SaaS businesses but I'm sure it applies in all aspects of software)
https://medium.com/@adamagb/nintendo-s-little-known-product-...
In some circles, you might even be accused of being a boomer for using SQL. I think a lot of developers are missing out on just how much runway you can get out of SQL and libraries like SQLite. You would also be missing out on one of the greatest breakthroughs in the history of computer science with regard to our ability to model problem domains and perform inhuman queries against them with millisecond execution times. But hey, maybe machine learning and MongoDB are working for your shop.
The final thing a lot of people miss is old ideas. Put your entire application on a single server somewhere, with all of its dependencies living in the same box. Optimize the vertical before you rewrite for horizontal, because 99% of the time you will go bankrupt before you get as big as Netflix, so it won't matter anyway. Plus, you would go bankrupt faster by chasing delusions of web-scale grandeur when you could have had the MVP done 3 years ago with just a simple SQLite database back-end and a T3a.micro. More likely than not you would have discovered it was a bad idea to start with, and could have moved on more quickly to the thing you actually should have been focusing on.
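To make the "runway" point concrete, here's a minimal sketch using Python's stdlib sqlite3 module (the schema and data are made up for the example): an in-memory database modeling a tiny todo list, queried declaratively.

```python
import sqlite3

# ":memory:" is handy for demos/tests; a file path gives you persistence.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE todos ("
    "  id INTEGER PRIMARY KEY,"
    "  description TEXT NOT NULL,"
    "  done INTEGER DEFAULT 0)"
)
conn.executemany(
    "INSERT INTO todos (description) VALUES (?)",
    [("todo one",), ("todo two",), ("todo three",)],
)
conn.execute("UPDATE todos SET done = 1 WHERE description = ?", ("todo one",))

# Declarative query: the engine picks the execution plan for us.
remaining = conn.execute(
    "SELECT COUNT(*) FROM todos WHERE done = 0"
).fetchone()[0]
print(remaining)  # 2
```

No server to run, no ORM to configure: the whole "back-end" is one file (or one in-memory handle) and a query language that's been battle-tested for decades.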
Emacs and org-mode (and many things GNU) have started to make more and more sense to me in this day and age.
I suppose you're writing this on a 1982 Commodore 64...
“Better application” + “old tech” ≡ “new tech”.
Group permission, PAM, SSO, etc. It's like these developers have never been exposed to Active Directory ever in their life...
2. Semantic sysadmin to declare your intent in regards to your infrastructure no matter how it is implemented (i.e. with a standard specification, interoperability/migration becomes possible) https://ttm.sh/dVy.md
3. GUI/WebUI CMS for contributing to a shared versioned repository. Sort of what netlify is doing, but using a standard so you can use the client of your choice and we tech folks can hold onto our CLI while our less-techie friends can enjoy a great UI/UX for publishing articles to collective websites.
4. Structured shell for the masses. Powershell isn't the worst, but in my view nushell has a bright future ahead. For the people who don't need portability, it may well entirely replace bash, Python and perl for writing more maintainable and user-friendly shell scripts. https://nushell.sh/
5. A desktop environment toolkit that focuses on empowering people to build more opinionated desktops while mutualizing the burden of maintaining core components. Most desktop environments should have a common base/library (freedesktop?) where features/bugs can be dealt with once and for all, so we don't have to reinvent the wheel every single time. Last week I learnt some DE folks want to fork the whole of GTK because it's becoming too opinionated for their usage, and GNOME is nowadays really bloated and buggy thanks to JavaScript hell. Can't we have a user-friendly desktop with solid foundations and customizability?
I'd love to get more of your thoughts around how PowerShell might be more useful for the kinds of scenarios you're thinking about. We see a lot of folks writing portable CI/CD build/test/deploy scripts for cross-platform apps (or to support cross-platform development), but we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).
Structured shells have so much potential outside of that, though. I find myself using PowerShell to "explore" REST APIs, and then it's easy to translate that into something scripted and portable. But I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.
Plus, PS enables me to Google regex less :D
Admittedly, i have no idea why we even need to do that nowadays, but that seemed to work.
Did you have some specific tool in mind? Because I completely agree that this is a great way of working with content. We have been doing that for a couple of months with our own tool. It uses Git and stores content in a pretty-printed JSON file. Techies can update that directly and push it manually. Content editors can use our tool to edit and commit to Git with a simple Web UI. Would that go into a direction you were thinking of?
I wonder if shell stuff would work better in a notebook like environment.
Edit: At least one exists: https://shellnotebook.com/
Really? Can you elaborate a bit on the why? As far as I can tell, GNS has been around as a proposal for years and has gained no traction.
The way we treat allergies today, with Zyrtec and Claritin, is medieval medicine. It doesn't solve the underlying problem; it just tries to cover it up.
Allergy immunotherapy is the future. Most people don't realize that allergies are now a curable disease. In the future, taking Claritin for allergies is going to seem like taking Tylenol for an ear infection. Why would you treat the symptoms when you could just cure the disease?
I started Wyndly (https://www.wyndly.com) to bring immunotherapy for pollen, pets, and dust across the USA. But we'll expand into food allergies soon, too.
Besides the delivery vehicle, what are the differences between allergy shots and these droplets?
Have any data to back up this substitute?
Also, Zyrtec and Claritin did nothing for me, I’m an Allegra guy
What my doctor told me was: you're going to get increasing doses a few times a week, it will take a lot of time and be hard, and at some point you'll bump into an adverse reaction while you're in the waiting room after the shot.
The medical system was kind of broken with respect to my plight.
After consulting a number of folks, I finally found EPD and went to treatment.
https://en.wikipedia.org/wiki/Enzyme_potentiated_desensitiza...
It was really helping, then they stopped offering it in my area. I was pretty bummed they did away with it, because it helped me without side effects. My symptoms decreased in severity and eventually I felt fine. Apparently it was from the UK and worked well there.
I hope I'm wrong!
Looks like the innovation here is moving the serum from intradermal injection to a liquid, oral treatment?
(source: I have had immunotherapy)
In SPARQL you write statements in the form
<thing> <relation> <thing>
But the cool part is that any of those three parts can be extracted, so you can ask things like "what are the cities in <country>", or "what country harbors the city <city>", but most importantly, "how does <city> relate to <country>".

For example, if you wanted to find out all the historical monuments in state capitals of a country (using my home country as an example, also pseudocode for your time's sake):
fetch ?monumentName, ?cityName given
?monument "is called" ?monumentName.
?monument "is located within" ?city.
?city "is capital of" ?state.
?city "is called" ?cityName.
?city "is located within" "Brazil"."Too powerful" doesn't seem like a thing until you realize it undermines DBA's skill investments, means business level people have to learn something and solve their own problems instead of managing them, disrupts the analyst level conversations that exist in powerBI and excel, seems like an extravagent performance hit with an unclear value prop to devops people, and gives unmanageable godlike powers to the person who operates it. (this unmanagability aspect might be what holds graph products back too)
If you don't believe me: the companies who use them also get a rep for having uncanny powers because of their graphs — FB, Twitter, Palantir, Uber, etc.
Using ML to parse and normalize data to fit categories in RDF graphs is singularity-level tech, imo and where that exists today, I'd bet it's mostly secret.
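The pseudocode above maps fairly directly onto real SPARQL against Wikidata's public endpoint. A hedged sketch in Python (P131 "located in" and P1376 "capital of" are real Wikidata properties and Q155 is Brazil, but the query is illustrative — a real one would also filter ?monument by class):

```python
from urllib.parse import urlencode

# Wikidata's public SPARQL endpoint.
ENDPOINT = "https://query.wikidata.org/sparql"

# Rough SPARQL equivalent of the pseudocode: things located in the
# capitals of Brazilian states, with human-readable labels.
query = """
SELECT ?monumentLabel ?cityLabel WHERE {
  ?monument wdt:P131 ?city .
  ?city wdt:P1376 ?state .
  ?state wdt:P131 wd:Q155 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
"""

# Any HTTP client can GET this URL and receive JSON rows of bindings back.
url = ENDPOINT + "?" + urlencode({"query": query, "format": "json"})
```

The same triple-pattern shape works for the "how do these two things relate" trick: put the variable in the predicate position instead of the subject or object.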
c1
{
    // Marked base resource identifiers used for concatenation.
    "resources" = [
        &people:@"https://springfield.gov/people#"
        &mp:@"https://mypredicates.org/"
        &mo:@"https://myobjects.org/"
    ]
    // Map-encoded relationships (the map is the subject)
    $people:"homer_simpson" = {
        /* $mp refers to @"https://mypredicates.org/"
         * $mp:"wife" concatenates to @"https://mypredicates.org/wife"
         */
        $mp:"wife" = $people:"marge_simpson"
        // Multiple relationship objects
        $mp:"regrets" = [
            $firing
            $forgotten_birthday
        ]
    }
    "relationship statements" = [
        &marge_birthday:($people:"marge_simpson" $mp:"birthday" 1956-10-01)
        &forgotten_birthday:($people:"homer_simpson" $mp:"forgot" $marge_birthday)
        &firing:($people:"montgomery_burns" $mp:"fired" $people:"homer_simpson")
        // Multiple relationship subjects
        ([$firing $forgotten_birthday] $mp:"contribute" $mo:"marital_strife")
    ]
}
RDF is gonna be so awesome when it finally hits the mainstream! I've even written an engine that takes triples and renders web apps.
This is effectively a todo MVC as triples:
var template = {
  "predicates": [
    "NewTodo leftOf insertButton",
    "Todos below insertButton",
    "Todos backedBy todos",
    "Todos mappedTo todos",
    "Todos key .description",
    "Todos editable $item.description",
    "insertButton on:click insert-new-item",
    "insert-new-item 0.pushes {\"description\": \"$item.NewTodo.description\"}",
    "insert-new-item 0.pushTo $item.todos",
    "NewTodo backedBy NewTodo",
    "NewTodo mappedTo editBox",
    "NewTodo editable $item.description",
    "NewTodo key .description"
  ],
  "widgets": {
    "todos": {
      "predicates": [
        "label hasContent .description"
      ]
    },
    "editBox": {
      "predicates": [
        "NewItemField hasContent .description"
      ]
    }
  },
  "data": {
    "NewTodo": {
      "description": "Hello world"
    },
    "todos": [
      { "description": "todo one" },
      { "description": "todo two" },
      { "description": "todo three" }
    ]
  }
}

See https://elaeis.cloud-angle.com/?p=71 and https://github.com/samsquire/additive-guis
I couldn't agree more. I know a lot of this kind of "semantic web" stuff has some pretty vocal detractors and that adoption seems limited, but I still think there is a ton of "meat on this bone". There's just too much potential awesomeness here for this stuff to not be used. I think this is an example of where incremental refinement is the name of the game. As computers get faster, as we get more data, as algorithms improve, etc. we'll get closer and closer to the tipping point where these technologies really start to reveal their potential.
Another example, demonstrating the querying for the relation part, would be to find Leonardo DaVinci's family members: (again in pseudocode so you don't need to dwell in the syntax)
fetch ?kinName, ?linkName given
?link "is called" ?linkName.
?kin "is called" ?kinName.
"Leonardo DaVinci" ?link ?kin.
?link "is" "familial".
Line 3 was the "mindblow" moment for me: you can ask how two objects are related without knowing either one! (Though I did know one of them in this example, Leonardo.)

Ask HN: What novel tools are you using to write web sites/apps? - https://news.ycombinator.com/item?id=26693959 - April 2021 (320 comments)
Ask HN: What startup/technology is on your 'to watch' list? - https://news.ycombinator.com/item?id=25540583 - Dec 2020 (248 comments)
But not in very cold environments. I have one and when there is more than two degrees of frost it struggles.
So for a lot of continental areas they are almost useless since they do not function when you really need them.
For temperate climates and coastal regions they are wonderful.
But heat pumps in old homes just aren't a thing because it's expensive and a lot of work to adapt the house's existing heating. People do understand it, they just don't want to bother with that.
But retrofitting requires a lot of work - replacing radiators with much larger ones, maybe ripping out pipes, and for ground source digging up the garden / street.
> I honestly think part of the reason they are not adopted as much is people can't understand them and don't trust them because of their ignorance.
I think there’s genuine reason right now.
For existing houses it only makes sense if you have really good insulation, which rules a lot of people out. That’s why they’re only being focused on new builds, so they can guarantee the insulation is adequate.
They’re expensive (for existing housing) and the returns aren’t quite there yet. Parts and expertise are also quite limited compared to boilers (the market isn’t really there yet).
It also depends what climate you live in for how useful they will be
Some people say that there's nothing new in it, but to my mind they're missing the point: the Berkeley Four took what couldn't be appropriated for profit, and built a statement about what computing is... They revealed the Wizard of Oz to everyone, so that anyone with some computing background can build a processor, freely.
And now this freed wizard is working his magic, and will change the computing landscape irrevocably.
They could already do that. I designed and laid out in silicon a 32 bit processor as part of my undergraduate studies in computer engineering.
Perhaps it will lead to a processor startup, but follow that to its logical conclusion: it takes a huge, profitable company to sustain processor delivery for years. There's a very good reason why only a handful of companies make the top 6 CPU architectures. There are still Synopsys ARC, Renesas, Atmel, and PIC, just to name a few of RISC-V's competitors.
In reality, the Berkeley Four just made a handful of semi companies richer. WDC, NXP, NVIDIA, Microchip, etc. don't have to pay Arm for IP if they use RISC-V. Did that really help anything? Meh.
Sometime in the last two decades (and again, I'm probably super late to the party on this) it's become extremely affordable to dip one's toe into electronic hardware and embedded software. And not just at the "breadboarding something with an Arduino" level, but at the level of building small production runs of a product that people would actually pay money for.
In a way it reminds me of the mid-2000s era of web technology, where over the course of a few years you went from "putting expensive servers in a data center" to "filling out a web form" in order to host an app in a reasonably high-availability environment.
Or another way of looking at it, a lot of things that maybe you previously had to fund-raise for are now things you can bootstrap, and many things are cheap even by hobby standards.
That means for a lot of projects (for technical folks) you don't need to convince anyone else that your idea has merit, you can just build it and find out.
It could then use IPFS to host "Public Facing" posts. People could pin - or pay for pinning - their posts.
IPFS is what I hope will lead to further democratization of the internet.
Briefly it's a genuine, and scientifically uncontroversial, form of 'cold' fusion enabled by muons — a more massive relative of the electron that was in the news recently thanks to the potentially interesting results coming out of the Muon g-2 experiment at Fermilab [2].
Like conventional 'hot' plasma fusion, in all experiments to date the energy input needed to sustain the process has exceeded the output, but it may be possible to use it to generate power. Unlike conventional fusion though, it receives relatively little attention, and there is no well-funded international effort to tackle the associated technical challenges. As with conventional fusion the technical challenges seem formidable, but it could be an interesting technology if a way could be found to make it work.
Listeners of Lex Fridman's podcast may recall that it was briefly mentioned in the episode he made last year with his father, Alexander Fridman, who is a plasma physicist [3]. As someone who has been interested in the idea for years and barely hears any mention of it, I was pleasantly surprised it came up.
It was also covered on the MinutePhysics YouTube channel in 2018 [4].
[1] https://en.wikipedia.org/wiki/Muon_catalyzed_fusion
[2] https://www.youtube.com/watch?v=O4Ko7NW2yQo
[3] https://www.youtube.com/watch?v=hNCz-8QIWuI
[4] https://www.youtube.com/watch?v=aDfB3gnxRhc
Bonus fact: Muon-Catalyzed fusion was first demonstrated by the Nobel laureate Luis Alvarez, who, with his geologist son Walter Alvarez, later proposed the 'Alvarez hypothesis' for the extinction of the dinosaurs by asteroid impact.
-A Software Engineer
https://www.arpa-e.energy.gov/technologies/projects/conditio...
Are there recent developments in this field that change that?
https://techtransfer.universityofcalifornia.edu/NCD/24852.ht...
Unless we figure out some awesome hardware acceleration for it, it's not practical but for a few niche applications.
It also has the problem that you can use computation results to derive the data, if you have enough control over the computation (e.g. a reporting application that allows aggregate reports).
1. Zero-knowledge proofs,
2. shielded ledgers,
3. democratized and energy-efficient mining,
4. inflationary control, and
5. wallet recovery.
No one has all of these yet, but ZKP is a big part of it.
If we could simulate and observe what happens with complex chemistry accurately, it would completely change biology, medicine, and materials science.
That's probably far less true than you imagine. See Derek Lowe's take on it: https://blogs.sciencemag.org/pipeline/archives/2021/03/19/ai...
The rate-limiting steps in drug discovery is in figuring out a) what you need to muck up to improve health or b) how to muck it up without mucking up other things badly enough to kill you. Computational chemistry has generally focused more on solving problems c) how to muck up this target more effectively or d) how to make the mucker-upper in the first place, which, while not useless, is not going to be a revolutionary change by any stretch of the imagination.
There is the rub. Biological simulations have been written for 40 years now. It's an extremely difficult problem considering how many latent variables are at play, and people have been working on it for a very long time now.
If anyone is working on this, and is looking to hire a computational organic chemist-turned-ai engineer, let me know!
There are Python and MATLAB bindings; in fact, MATLAB 2021a now uses SuiteSparse:GraphBLAS for its built-in sparse matrix multiplication (A*B).
Oh wait, that's under-the-radar technology.
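For anyone wondering why a sparse linear algebra library matters for graphs: GraphBLAS expresses graph algorithms as matrix operations over semirings. This is a toy plain-Python sketch of that idea (not the GraphBLAS API): one BFS step as a sparse matrix-vector product over the boolean (OR, AND) semiring, where the "matrix" is an adjacency structure.

```python
# Sparse adjacency "matrix": adj[i] = set of neighbors of node i (edge i -> j).
adj = {0: {1, 2}, 1: {3}, 2: {3}, 3: set()}

def bfs_step(frontier: set, adj: dict) -> set:
    """One BFS step: the new frontier is everything one hop from the old one.

    Over the boolean semiring, this is exactly a sparse matrix-vector
    multiply: OR together the neighbor sets of the frontier nodes.
    """
    out = set()
    for node in frontier:
        out |= adj.get(node, set())
    return out

frontier = {0}
frontier = bfs_step(frontier, adj)  # {1, 2}
frontier = bfs_step(frontier, adj)  # {3}
```

Swap the semiring (e.g. min-plus instead of or-and) and the same multiply computes shortest paths instead of reachability — that generality is the whole pitch.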
Stripe + TaxJar was cheaper and easier to implement and maintain.
It's being used in new tokomak fusion reactor designs, like SPARC.
https://en.wikipedia.org/wiki/Rare-earth_barium_copper_oxide
More broadly, decentralizing insurance in this way would be very cool too... There's little difference, in my mind, between a prediction market predicting weather changes or elections, and insurance contracts around risk.
... and what's even cooler is: can we build bots and models to actually get an edge on these predictions? Imagine applying HFT strategies from stocks to predicting real-world events... Now it sounds like we can actually get good at forecasting difficult-to-predict human events, rather than just stock prices.
If you’re in the US there is a regulated prediction market set to launch soon.
- jamstack.wtf
- federation-based networks
- CRDTs: https://josephg.com/blog/crdts-are-the-future/
- data-oriented programming paradigm (https://rugpullindex.com/ shameless plug)
- web components: https://docs.ficusjs.org/index.html
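On the CRDT bullet: the core trick is that replicas merge with a commutative, associative, idempotent operation, so they converge without coordination. A minimal Python sketch of a grow-only counter (G-Counter), the "hello world" of CRDTs (class and method names are mine, not from any particular library):

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments only its own slot;
    merge takes the elementwise max, so replicas converge regardless of
    message order, duplication, or delay."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> that replica's local count

    def increment(self, n: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("a"), GCounter("b")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5 -- both replicas converged
```

Text editing needs far hairier structures (see the linked post), but the convergence guarantee comes from the same merge-is-a-join property.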
e.g. lemmy.ml
It reminds me of a Star Trek tricorder. Imagine having a camera that can easily ID greenhouse gases, quantify water/fat content in food, identify plant diseases, verify drug components, identify tumours, and measure blood oxygenation. On the machine vision side of things, it could probably outperform any conventional imaging + DNN combination, and you'd probably get pixel-wise segmentation for free while you're at it.
There's been a lot of academic progress going on - it shouldn't be long until hyperspectral imaging makes its way into our lives.
https://news.ycombinator.com/item?id=20985429
https://news.ycombinator.com/item?id=20394166
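To make the "pixel-wise segmentation for free" point concrete: each hyperspectral pixel is a whole spectrum (one value per band), so you can classify pixels directly against known material signatures. A toy Python sketch — the signatures and band count here are made up for illustration:

```python
def classify_pixel(spectrum, signatures):
    """Label a pixel by the reference spectrum closest in squared L2 distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda label: dist(spectrum, signatures[label]))

# Hypothetical 4-band reflectance signatures (real sensors have hundreds).
signatures = {
    "water":      [0.10, 0.20, 0.05, 0.02],
    "vegetation": [0.30, 0.50, 0.90, 0.40],
}

pixel = [0.28, 0.45, 0.85, 0.38]
print(classify_pixel(pixel, signatures))  # vegetation
```

Run that over every pixel and you have a segmentation map with no training step at all; real systems layer calibration and fancier classifiers on top, but the per-pixel spectrum is what makes it cheap.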
I think it's inevitable that darklang's vision will be achieved eventually, at least in part, whether by darklang or by other projects. We are already at the stage where you can define your infrastructure in code, and execute functions on managed "serverless" runtimes. It's not too much further to the point that cloud providers will build tightly integrated developer experiences that allow a developer to "just code" while handling all of the complexity that comes after. Within some large software companies, there is something close to this experience, but it hasn't yet been wrapped up and sold to the public.
Wireless VR really is a necessity for it to become more than just a tiny niche.
Not yet mainstream, but it’s actually a joy to use and I think we’ll have significant marginal improvements over time which will keep making it more and more worthwhile.
I think the real thing will be commercial applications of VR, where companies use it because it’s the best way to get certain kinds of work done. And NOT desk work, either. We’re maybe a decade or two from that being mainstream, but it’s going to be a significant improvement.
No more ceramic implants, no more root canals. Grow new shiny and healthy teeth.
I haven't been able to really get into FPGAs but I'm optimistic about them. They're pretty clunky right now but I'm hoping they'll just get easier and more accessible.
If we want computation that we're able to verify as being secure, FPGAs are the closest I see us getting to it. There's also applications in terms of parallel computation that they can do over more traditional approaches.
This might go by the way of Transmeta or always remain niche but it seems like they have a lot of potential.
* Open Source Hardware, specifically relating to robotics
Electronics production is becoming cheaper and easier. Open source hardware has the potential to become as ubiquitous as open source software.
Electronics hardware is still way harder than it needs to be, so the progress is slow, but if we get within range of having an iteration cycle in electronics that's as fast as software development, we'll see spectacular innovation. Robotics especially, as that's a kind of straightforward physical manifestation of electronics that has a potentially large market.
There's a $5k fiber laser that can ablate copper. This could potentially fuel the next round of cottage industry board fab houses (in the US and other non-Chinese countries) and enable rapid turn around time. I wish I could justify the $5k to play around with it but it's just outside of my price range.
* Solar
I'm not really sure if this is 'under-the-radar' but for the first time, solar has become cheaper than coal. This means besides giving a moral incentive for people to use solar, there's now an economic one, which means the transition will most likely be broad.
Coupled with battery technology advances, this could have drastic impacts on the ubiquity and cost of power. I wonder if we'll see a kind of "energy internet" where people will create their own distributed electrified infrastructure.
May I suggest you take a look at microROS[3]?
I am also super excited about OSHWA certified open hardware [4].
[1] https://certification.oshwa.org/ [2] https://www.openrobotics.org/ [3] https://micro.ros.org/ [4] https://certification.oshwa.org/
For a great recent example that gets at some of this, see "Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions" - https://arxiv.org/abs/2104.03829
I'm not affiliated with this work but I am building a company in this area (because I'm excited). Company is in my profile.
https://techxplore.com/news/2021-04-rice-intel-optimize-ai-c...
2. VR and AR. Ten years from now, when hardware can display 16k per eye in a casual, lightweight device no bigger than regular sunglasses, everyone will be wearing one, making mobile phones obsolete. Every object, animal, or human you look at will be augmented. It will be the greatest technological shift since the rise of the mobile phone, changing human life dramatically.
2) Reconfigurable computing - The power of the FPGA without the hassle, a homogeneous lattice of computing nodes as small as single bits allows for fault tolerance, almost unlimited scaling, and perfect security. It offers the power of custom silicon without the grief.
3) Magnetic core logic, initially realized when transistors still weren't reliable enough to build computers out of, may be making a comeback for extreme environments, such as that on Venus.
4) Reversible compilation - being able to work from source --> abstract syntax tree --> source in any language (with comments intact) will be a quite powerful way to refactor legacy codebases in relative safety.
5) Rich source / Literate Programming - embedding content in the program instead of having a ton of "resources" helps reduce the cognitive load of programming.
NB-IoT is justified if I know that the data volume of my "solution" might increase due to feature/scope creep (and replacing the battery/sensor isn't going to become an annoyance in 2-3 years at end of life).
For most use-cases LoRaWAN makes more sense, but it doesn't have the marketing budget that's available to T-Mobile, Vodafone and Co.
When I did my postgrad research project, back in 2016, I was using LoRaWAN and thought it was so obviously going to be huge in e.g. AgriTech. Surprised not that much has happened with it tbh.
It’s queryable like a database, but it doesn’t store your data - it proxies all your other databases and data lakes and stuff, and lets you join between them.
Trino is a great example.
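The core trick - joining across stores that never share a disk - can be sketched without a real federated engine. Here sqlite's ATTACH stands in for what Trino does across real backends (the file names and tables are invented for the example):

```python
import os
import sqlite3
import tempfile

# Two independent stores standing in for, say, a Postgres instance and a
# data lake.  A federated engine like Trino does this across real
# backends; sqlite's ATTACH is just enough to show the core idea --
# joining data that never lives in one place.
tmp = tempfile.mkdtemp()
orders_path = os.path.join(tmp, "orders.db")
crm_path = os.path.join(tmp, "crm.db")

with sqlite3.connect(orders_path) as orders:
    orders.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
    orders.execute("INSERT INTO orders VALUES (1, 10, 99.5), (2, 11, 12.0)")

crm = sqlite3.connect(crm_path)
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.execute("INSERT INTO customers VALUES (10, 'Ada'), (11, 'Grace')")

# The "hub": attach the other store and join across both in one query.
crm.execute(f"ATTACH DATABASE '{orders_path}' AS orders_db")
rows = crm.execute("""
    SELECT c.name, o.total
    FROM customers AS c
    JOIN orders_db.orders AS o ON o.customer_id = c.id
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Ada', 99.5), ('Grace', 12.0)]
```

The difference from streaming everything into one warehouse: the data stays where it is, and the hub only moves the rows each query actually needs.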
Aren't stuff like data lakes and warehouses supposed to address the need for a centralized datastore?
Outside of perhaps an easy-to-apply interface, what benefit would a data hub provide over just streaming duplicates from all of your databases into a single data lake like Snowflake?
https://materialize.com/lateral-joins-and-demand-driven-quer...
Started learning Clojure / ClojureScript and keeping an eye on ML languages like ReScript and ReasonML.
I hope I'll soon never have to write JS/TS code again.
I talk about this more here: https://news.ycombinator.com/item?id=26084187
I think traditional tokamaks are 5-10 years from net positive power thanks to better superconductor tech. There is finally private investment in the space, and it has been growing at an absolutely crazy pace.
I think in about 5 years there is going to be a fusion power gold rush.
https://phys.org/news/2021-04-hydrogen-fuel-machine-ultimate...
https://newatlas.com/energy/osu-turro-solar-spectrum-hydroge...
https://uh.edu/news-events/stories/2017/April/05152017Ren-Wa...
And there is currently a very large PR effort by fossil fuel companies to promote hydrogen. I'd suggest extreme skepticism about any "news" promoting it at present. Always ask where the hydrogen is actually coming from in the present, not 30 years down the road.
Example company creating anti-COVID solutions with it: https://www.zengraphene.com/
Both technologies totally redefine what we mean by "sequencing a genome" and open up broad categories of mutations that are completely invisible to more common forms of genotyping or sequencing.
Also anything that is multipurpose. Rope + tarp = shelter, sunshade, awning, hammock, sail, etc. One gadget cooks and chops etc.
Oh, and household robots. Already have vacuum, mopping, and pool robots. Considering a lawn robot. Clothes folding can’t be far off, right?
Siri is underrated in my circles, I hardly see anyone use it. Social anxiety of yelling at your phone?
It's a novel proof-of-X algorithm (Proof of Space and Time) that front-loads the resource needs into a plotting phase, followed by a very efficient farming phase to process blocks with transactions. It seems like a much fairer, more sustainable model for a secure digital currency.
It also has an interesting, Lisp based programming language on it.
But what excites me is that it's led by Bram Cohen, the guy who invented BitTorrent, one of the best pieces of tech I've used nearly my whole tech life.
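The plot-then-farm split is easy to caricature in a few lines. This is emphatically not Chia's actual construction (which uses tables of nested hashes and verifiable delay functions), just a toy showing why farming is cheap once plotting is paid for:

```python
import hashlib
import os

def plot(size: int) -> dict:
    # "Plotting": the one-time, compute/disk-heavy phase.  Precompute a
    # table of hashes so that farming later is just a lookup.
    return {hashlib.sha256(str(i).encode()).hexdigest(): i
            for i in range(size)}

def farm(plot_table: dict, challenge: str) -> tuple:
    # "Farming": for each network challenge, find our best stored entry.
    # Numeric closeness to the challenge stands in for proof quality --
    # bigger plots win more often, but each check is nearly free.
    best = min(plot_table,
               key=lambda h: abs(int(h, 16) - int(challenge, 16)))
    return best, plot_table[best]

challenge = hashlib.sha256(os.urandom(16)).hexdigest()
proof_hash, index = farm(plot(10_000), challenge)

# Anyone can verify the claimed proof with a single hash, no plot needed:
assert hashlib.sha256(str(index).encode()).hexdigest() == proof_hash
```

The point of the model: energy is spent once, up front, instead of continuously as in proof of work.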
If each "bit" of DNA can be either A, C, G, or T, why call that binary?
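Right - with four symbols it's quaternary, two bits per base. A small round-trip encoder makes the arithmetic concrete (the base-to-bits assignment here is arbitrary, not any standard):

```python
# Each base stores two bits, so DNA is base-4, not binary: a byte is
# exactly four bases.  A=00, C=01, G=10, T=11 is an arbitrary mapping.
BASES = "ACGT"

def to_dna(data: bytes) -> str:
    # Walk each byte two bits at a time, most significant pair first.
    return "".join(BASES[(b >> s) & 0b11]
                   for b in data for s in (6, 4, 2, 0))

def from_dna(dna: str) -> bytes:
    out = bytearray()
    for i in range(0, len(dna), 4):
        b = 0
        for base in dna[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

print(to_dna(b"Hi"))  # CAGACGGC
assert from_dna(to_dna(b"Hi")) == b"Hi"
```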
TLDR: We can now make lenses in any shape we please, not just with parabolas and circles (kinda).
Should have implications for anything that has to do with light: Telescopes, lasers, com-sats, AR/VR, etc.
https://gizmodo.com/a-mexican-physicist-solved-a-2-000-year-...
2) Memristors. We've not found a cheap and stable little component yet like the rest of the 2-lead elements, but it seems to be on the way (says every futurist)
TLDR: We'll be forced to redo a lot of computer HW, as memristive hardware will likely be much faster and cheaper on power. Think coin-cell batteries powering very good image-recognition systems, as cheap as a dollar-store watch.
Microsoft and Oculus have hands free controls that actually work. Inside out tracking is progressing quickly. New UX patterns are getting forged.
I'm excited to see what we'll have in a few years' time. In my mind it's far more exciting than something like crypto, but it gets much less press.
The protocol offers blockchain-based smart ticketing that aims to eliminate fraud and prevent scalping. This has the potential to get huge when events start coming back post-COVID.
If it is for anything other than very LOW power (microwatts), you're going to be disappointed.
It is essentially a beta emitter hooked to a capacitor via some electronics to handle voltage conversion. The thing is, the beta source is use-it-or-lose-it, and very low power. If you scaled this up to power a Tesla, it would be a nightmare, as it would need to dissipate the full power the car requires, all the time, or it would melt down (à la Fukushima).
For a longer debunk - https://www.youtube.com/watch?v=uzV_uzSTCTM
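A back-of-envelope calculation shows the scale problem. Numbers below are rough physical constants for tritium (a common betavoltaic fuel) and a guessed cruising power; treat every figure as approximate:

```python
import math

# Back-of-envelope: why a beta emitter can't power a car.
# All numbers approximate; cruising power is a rough guess.
HALF_LIFE_S = 12.32 * 3.156e7       # tritium half-life, ~12.3 years
ATOMS_PER_G = 6.022e23 / 3.016      # Avogadro / molar mass of tritium
MEAN_BETA_J = 5.7e3 * 1.602e-19     # mean beta energy, ~5.7 keV

decays_per_s_per_g = math.log(2) / HALF_LIFE_S * ATOMS_PER_G
watts_per_gram = decays_per_s_per_g * MEAN_BETA_J
print(f"{watts_per_gram:.2f} W/g of raw decay heat")  # ~0.33 W/g

cruise_power_w = 20_000  # very roughly, a car at highway speed
kg_needed = cruise_power_w / watts_per_gram / 1000
print(f"{kg_needed:.0f} kg of tritium")  # tens of kg, before conversion losses
```

And that decay heat flows whether you use it or not, which is the "dissipate full power all the time" problem above.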
https://thinkingagriculture.io/the-agriculture-unicorn-hidin...
I think a lot of really cool innovation is going to come out of easily transmissible programmable money.
I'm the only one I know trying to do this. It's changed my life. I'm now applying my ideas in how to choose to lovingly coevolve with my partner and the 2.5-year-old we conceived. The results from this experiment are getting to the point that people are noticing. There exists a unifying spiritual path through my (mis?)application of category theory in my daily life. I am a noob at hacking the human, and I'm saying that after having had major successes within this human body. I also recognize I'm doing way more than many people, so if I'm a noob, so are most if not all people. The Buddha is an example of an elite hacker of the human.
Still waiting for people willing to take the first step, which is cultivating an ideal learning environment within oneself. This means learning to abandon judgments by default.
This will replace Facebook, YouTube, Cloud, Google, Android, everything. (In a millennium or so.)
Now... the race is on to see whether we can fill it with normal stuff instead of letting conspiracy theorists and racists flood it.
Ummm... to start with "what everybody else has already said." If I have anything to add it might be (and somebody might have said this already as well, and I just missed it)
Synthetic Biology - this entire field fascinates me, and I expect big things to come in the future when we can customize DNA and grow items we need, that are tailor made to various parameters. This is also the beginning plot-line to many horror novels and movies though, so "everything isn't rainbows and sunshine" as they say.
Nanotech - related to above, but as with synthetic biology, it fascinates me to think what we can do when we have atomic scale self-assembling machines.
AR/VR - maybe not "under the radar" anymore, but I think there's a ton of untapped potential in this space.
Semantic Web - Yes, I'm still a believer in the idea of RDF / SPARQL / etc. I've said enough about this in the past, so I'm not going to drill any deeper here.
AI - maybe more "AGI" than the ML stuff we have today that gets labeled "AI". And saying that is not an attempt to denigrate ML or any of the radical stuff going on today. It's just that for as much as contemporary "AI" can do, I think there's a lot it still can't do, and I like to daydream about the potential of AIs that get closer and closer to (and exceed?) human abilities. See above about horror movie plots though. :-(
Fusion: this has definitely been mentioned already, but add me to the list of people who are hopeful/excited about the prospects.
Time Travel: Actually no. I kinda hope this is impossible. I have a feeling that if unrestricted "Doctor Who" like time travel was possible, causality would collapse and all of reality would just become a big, jumbled mess, incapable of supporting life.
Green mobility concepts like:
- self-driving shared cars for first and last mile
- self-driving trains and buses for urban transportation
- self-driving high-speed trains and self-flying airplanes for longer distances
Yes, most of you have heard of it, but I think it is still very underrated.
Especially interesting are the ML libraries that have come out recently, and OTP 24, whose new JIT compiler gives a ~2x speed improvement to Elixir code, depending on the task.
We've learned to optimize our bodies through nutrition and physical fitness (even if not everyone does it, we have the know how), but our brains are the next frontier.
I've seen lots of snake oil in this space so far. I was going to link to Halo Neuro, but they've been acquired by a tDCS company - from what I understand, the technology isn't ready yet.
We're building a sleep headband that monitors your sleep state and uses sound to improve your sleep performance - https://soundmind.co
Others in this space are Emotiv, Muse, and Dreem.
We're probably going to see a wave of disruptions from technologies like GPT3.
For example, we might see something like this in the long term:
A) Someone will create a model to accurately convert low level source code to higher level source code that does the same thing when compiled down. Think assembly / machine code to high level code or even English descriptions of the underlying semantics.
B) At this point, why not pipe some DNA/RNA into the model from A) to get high-level insights?
C) Give it a couple iterations and it might be possible to create a compiler. For example... C to RNA
D) Finally... solve problems by creating sequences from scratch instead of re-using bits from mother nature
If we ever do get to D), I sure hope no country tries to use this in a terrible way...
Blockchains are secure systems because they're isolated systems. But smart contracts aren't very exciting without data from the real world. Oracles are the bridges to supply data from the real world to the blockchain world.
However, a system is only as strong as its weakest link. You'd want the same security guarantees as what blockchain can provide. So the blockchain needs to "trust" oracles to deliver correct data that's immune to manipulation.
With the rise of smart contracts and full automations, I think oracles will play a huge role in all of this.
The leading project that's working on this is Chainlink.
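The trust problem can be sketched in a few lines. Real oracle networks (Chainlink included) use public-key signatures across many independent nodes; an HMAC with a shared key, as below, is only the smallest possible caricature of "the contract rejects data it can't verify":

```python
import hashlib
import hmac
import json

# Toy oracle: off-chain data is signed so an on-chain consumer can check
# it wasn't tampered with in transit.  Key and pair name are invented.
ORACLE_KEY = b"hypothetical-shared-secret"

def oracle_report(price: float) -> dict:
    payload = json.dumps({"pair": "ETH/USD", "price": price}).encode()
    sig = hmac.new(ORACLE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def contract_accepts(report: dict) -> bool:
    expected = hmac.new(ORACLE_KEY, report["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, report["sig"])

report = oracle_report(3150.25)
assert contract_accepts(report)

# An attacker rewriting the price in transit breaks the signature:
report["payload"] = report["payload"].replace(b"3150.25", b"9999.99")
assert not contract_accepts(report)
```

The hard part the sketch skips is exactly what oracle networks exist for: making sure the signer itself reported the truth, not just that the bytes arrived intact.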
It's an API for Payroll. The number of use cases is pretty amazing!
I believe 3D printing will change the world in less than twenty years. We're currently in the hobbyist stage - think home computers in the late '70s and early '80s. It took a company to see the potential and package it up for anyone to use. I think there will be a breakthrough home 3D printer that starts whole new industries. You'll be able to buy physical products direct from anyone. Anyone can design and sell a vase or a bowl or a boomerang, because manufacturing and distribution are no longer barriers to market. Think of when music and video moved from being possible only in studios, to recording at a studio, to home recording. Anyways... I'm super excited.
Alternative data especially for investment decisions
In the future, geometric algebra will likely be part of everything we do, but it is still very unknown as of now.
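For anyone who hasn't met it: in geometric algebra, rotations come from "rotors" built out of the geometric product, with no matrices or complex numbers. A minimal 2D version, small enough to check by hand (a sketch for illustration, not a usable GA library):

```python
import math

# Minimal 2D geometric algebra: multivectors (scalar, e1, e2, e12) with
# the full geometric product, using e1*e1 = e2*e2 = 1, e1*e2 = -e2*e1.
def gp(x, y):
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 + b1*b2 + c1*c2 - d1*d2,   # scalar part
            a1*b2 + b1*a2 - c1*d2 + d1*c2,   # e1 part
            a1*c2 + c1*a2 + b1*d2 - d1*b2,   # e2 part
            a1*d2 + d1*a2 + b1*c2 - c1*b2)   # e12 (bivector) part

def rotate(v, theta):
    # Sandwich product R v R~ with rotor R = cos(t/2) - sin(t/2) e12.
    h = theta / 2
    rotor = (math.cos(h), 0, 0, -math.sin(h))
    rev   = (math.cos(h), 0, 0,  math.sin(h))  # reverse of the rotor
    return gp(gp(rotor, v), rev)

# Rotating e1 by 90 degrees gives e2:
_, x, y, _ = rotate((0, 1, 0, 0), math.pi / 2)
print(round(x, 9), round(y, 9))  # 0.0 1.0
```

The same sandwich-product recipe works unchanged in 3D and higher, which is a big part of the appeal.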
I think a lot of people are only allocated one monitor in most industries. This will change the way they work.
It will also change design.
The fact that there isn't more discussion around the cryptography, networking, etc. suggests to me that many are still unfamiliar with the power of the underlying technology.