It was really hard to build asynchronous code until now. You had to clone objects used within futures. You had to chain asynchronous calls together. You had to bend over backwards to support conditional returns. Error messages weren't very explanatory. You had limited access to documentation and tutorials to figure everything out. It was a process of walking over hot coals before becoming productive with asynchronous Rust.
Now, the story is different. What's more, a few heroes of the community are actively writing more educational materials to make it even easier for newcomers to become productive with async programming, much faster than it took the rest of us.
Refactoring legacy asynchronous code to async-await syntax offers improved readability, maintainability, functionality, and performance. It's totally worth the effort. Do your due diligence in advance, though, and ensure that your work is eligible for refactoring. Niko wasn't kidding about this being a minimum viable product.
I was trying to refactor one of my Rust projects the other day and almost immediately got hit by the "No async fn in traits" and the "No lifetimes on associated types" truck. Then, a few days later, this article comes along: https://news.ycombinator.com/item?id=21367691.
If GATs (generic associated types) can resolve those two problems, then I guess I'll just add that to my wish list. Hope the Rust team keeps up the awesome work :)
Async/await lets you write non-blocking but highly interleaved firmware/apps in allocation-free, single-threaded environments (bare-metal programming without an OS). The abstractions around stack snapshots allow seamless coroutines, and I believe they will make Rust pretty much the easiest low-level platform to develop for.
Céu is the more recent one of the two and is a research language that was designed with embedded systems in mind, with the PhD theses to show for it [2][3].
I wish other languages would adopt ideas from Céu. I have a feeling that if there was a language that supports both kinds of concurrency and allows for the GALS approach (globally asynchronous (meaning threads in this context), locally synchronous) you would have something really powerful on your hands.
EDIT: Er... sorry, this may have been a bit of an inappropriate comment, shifting the focus away from the Rust celebration. I'm really happy for Rust for finally landing this! (but could you pretty please start experimenting with synchronous concurrency too? ;) )
[1] https://en.wikipedia.org/wiki/Esterel
[2] http://ceu-lang.org/chico/ceu_phd.pdf
[3] http://sunsite.informatik.rwth-aachen.de/Publications/AIB/20...
I think if it's interesting and spurs useful conversation, it's appropriate, tangent or not. I for one am thankful for the links you suggested; they look interesting.
Céu looks very neat. I suspect (having not read much about it yet) that async codebases could take a lot of inspiration from it already.
I'll open a thread on users.rust-lang.org to discuss the similarities/differences with Céu, but for now, here's the main similarity I see:
A Céu "trail" sounds a lot like an `async fn` in Rust. Within these functions, `.await` represents an explicit sync-point, i.e., the function cedes the runtime thread to the concurrency-runtime so that another `async fn` may be scheduled.
(The concurrency-runtime is a combination of two objects, an `Executor` and a `Reactor`, that must meet certain requirements but are not provided by the language or standard library.)
This can let you avoid a lot of the pomp and circumstance of writing state machines by hand.
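For a flavor of what that hand-written ceremony looks like, here's a minimal sketch; the task, states, and step logic are all invented for illustration, not taken from any real codebase:

```rust
// A hypothetical two-step "fetch then save" task, written as the kind
// of explicit state machine you'd maintain by hand without async fn.
// The compiler generates something morally equivalent from an async fn.
enum Task {
    Fetching { url: String },
    Saving { body: String },
    Done,
}

impl Task {
    // Advance the machine by one step. A real poll-based future would
    // return Pending here while waiting on I/O, instead of completing
    // each step eagerly as this toy does.
    fn step(&mut self) {
        *self = match std::mem::replace(self, Task::Done) {
            Task::Fetching { url } => Task::Saving {
                body: format!("<contents of {}>", url),
            },
            Task::Saving { .. } => Task::Done,
            Task::Done => Task::Done,
        };
    }
}

fn main() {
    let mut task = Task::Fetching { url: "example.com".into() };
    task.step(); // now Saving
    task.step(); // now Done
    assert!(matches!(task, Task::Done));
    println!("state machine ran to completion");
}
```

With async/await, the enum, the state transitions, and the bookkeeping of local variables across suspension points are all generated by the compiler.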
I'm still wondering whether a lot of these problems can be solved with an "allocate only on startup" rather than a "never allocate" strategy, or whether fully dynamic allocation is required. It probably needs some real-world apps to find out.
It's for cases where alternatives don't exist or they are too expensive.
Language-level green threads are a safer abstraction over asynchronous I/O operations.
This stuff is easily available for C. It's just a function call instead of syntactic sugar. On Windows, use Fibers. On Linux, there is a (deprecated) API as well (forget the name). Or use a portable wrapper library.
Or just write a little assembly. Can't be hard.
Async I/O gives awesome performance, but further abstractions would make it easier and less risky to use. Designing everything around the fact that a program uses async I/O, including things that have nothing to do with I/O, is crazy.
Programming languages have the power to implement concurrency patterns that offer the same kind of performance, without the hassle.
I know Rust is all about zero-cost abstractions, "but at what cost" beyond just runtime cost? I appreciate their principled approach to mechanical sympathy and interop with the C abstract machine, but I'm just not enthused about this particular tradeoff.
An alternative design would have kept await, but supported async not as a function-modifier, but as an expression modifier. Unfortunately, as the async compiler transform is a static property of a function, this would break separate compilation. That said, I have to wonder if the continuously maturing specialization machinery in Rustc could have been used to address this. IIUC, they already have generic code that is compiled as an AST and specialized to machine code upon use, then memoized for future use. They could specialize functions by their async usage and whether or not any higher-order arguments are themselves async. It might balloon worst-case compilation time by ~2X for async/non-async uses (more for HOF), but that's probably better than ballooning user-written code by 2X.
For instance, Go does not give you the option to handle this yourself: you cannot write a library for implementing goroutines or a scheduler, since it's embedded in every program. That's why it's called a runtime. In Rust, every bit of the async implementation (futures, schedulers, etc.) is a library, with some language support for easing type inference and declarations. This should already tell you why they took this approach.
Regarding async/await and function colors (from the article you posted), I would much prefer Rust to use an effect system for implementing this. However, since effect systems are still largely a research topic and no major language is pushing in this direction (maybe OCaml in a few years?), it seems like a long shot for now.
So you have a synchronous HTTP server and you decide you want to make it async? OK, no problem: switch to an async-enabled request handler in main, and boom, everything that you wrote is recompiled into a state machine à la async, and at the very bottom, where it actually calls the library to write out to the socket, the library knows what the context is and can choose the async-enabled implementation.
I'm glossing over some important details and restrictions that might make this more complicated in practice, but I think it should at least be possible for functions to opt-in to 'async-specializability' to avoid having to rewrite the world for async.
How is that different than an `async` block?
Can you give one that reaches this goal? Go is often cited in that regard, but it doesn't really fit your description, since it trades performance for convenience (interactions with native libraries are really slow because of that) and still doesn't solve all problems, since hot loops can block a whole OS thread, slowing down unrelated goroutines. (There's some work in progress to make the scheduler able to preempt tight loops, though.)
https://cr.openjdk.java.net/~rpressler/loom/Loom-Proposal.ht...
Of course Python generally fails to offer "the same kind of performance" for anything limited by CPU or memory, so it technically doesn't fit the description either.
Asking as a beginner, what does the above mean?
I'm not sure what a hot loop is, or why it blocks an OS thread.
Can you explain a bit? What is the connection between concurrency implementation (which I am assuming you are talking about multiplexing multiple coroutines over the same OS thread) and say slowness in cgo? Having to save the stack? I don't get it.
First-class continuations. They are an efficient building block for everything from calling asynchronous I/O routines without turning your code into callback spaghetti, to implementing coroutines and green threads. Goroutines are a special and poorly implemented case of continuations. Gambit-C has had preemptive green threads with priorities and mailboxes at less than 1KiB per thread memory overhead for over a decade now, all built on top of first-class continuations.
If you use a work-stealing executor, tasks will get executed on another thread, so the impact of accidentally blocking is lowered. Tokio implements such an executor.
Microsoft kind of tried to do this with the new APIs for UWP: pretty much everything is async, the blocking versions of APIs were all eliminated, so there was no way for the async-ness to "infect" otherwise synchronous code. It was actually a pretty nice way to program; it's a shame it never took off.
They also lowered their portion of the revenue share considerably for Store apps, afaik.
So until there's some standardized form of seamless coroutines on OS level, that is sufficiently portable to be in wide use, we won't see widespread adoption of them outside of autarkic languages and ecosystems like Erlang or Go.
I think the best you could do would be heuristics - having inferred or user-supplied bounds on the complexity of functions, having rough ideas on how disk or network latency will affect the performance of functions, and bubbling that information up the call tree. It wouldn't be perfect, but it could be useful.
1. A performant "I/O" layer in the standard library that allows a large amount of concurrent activity (forget thread vs. coroutine differences).
2. Ability of programmer to express concurrency. Ideally, this has nothing to do with "I/O". If I am doing two calculations and both can run simultaneously, I should be able to represent that. Similarly for wait/join.
Explicitly marking a thread as async-only will just force everyone else (who needs sync and cannot track/return a promise/callback to their caller) to write a wrapper around it for no reason.
Besides, you don't have to put async/await everywhere: if your code is not performing I/O, it can ignore this concern completely.
The problem is that most of your code mixes I/O and non-I/O code, and people just don't think about it. E.g., a Django website is not just a web server; it also has plenty of calls to the session store, the cache backend, the ORM, etc.
Now you could argue that the compiler/interpreter is supposed to hide the sync/async choice from the code user. Unfortunately, this hides where the concurrency happens, and things have dependencies on each other. Some are exclusive, some must follow each other, some can run in parallel but must all finish together at some point, some access concurrent resources...
You must have control over all that, and for that to happen, you can either:
- have guard code around each place you expect concurrency. This is what we do with threads, and it sucks. Locking is hard, and you always miss some race condition because it can switch anywhere.
- have implicit but official switch points and silos you must know by heart. This is what gevent does. It's great for small systems, not so much at scale.
- have explicit switch points and silos: async/await, promises, goroutines. This is tedious to write, but the dangerous spots are very clear and it forces you to think about concurrency upfront.
The last one is the least bad system we've managed to come up with.
Welcome to the 1980s world of cooperative multitasking, but now with "multi-colored functions."
I don't know, maybe there are valid applications for await (such as much frequented web servers, where you might want to have 10s of thousands of connections, that would be too expensive to model as regular threads, but still you just want to get some cheap persistence of state and it's not a big problem that the state is lost on server reboot). I can't say, I'm not in web dev.
But I bet it's much more common that await is simply a little overhyped and often one of the other options (real threads or explicit state data structures) is actually a better choice.
Well... I can't help but whenever I see the await stuff it reminds me of times where I had to do cooperative multitasking and was longing for OS and/or CPU support for something which is non-invasive to my algorithms. But then I'm not sure whether I'm the grumpy old man or it is just history repeating.
The issue it solves is programmers having trouble executing parallel code in their heads; when the relationships become intricate (a computation graph), they just break down and write buggy software.
A scheduler is targeted at use cases. A preemptive scheduler optimizes for latency and fairness, which applies to real-time work (say, live audio/video or games), but for most other use cases you want to optimize for throughput.
With real threads you risk oversubscription, or not feeding them enough work; hence the task abstraction and a runtime that multiplexes and schedules all those tasks onto OS threads. Explicit state data structures are the bane of multithreading: you want to avoid state, since it creates contention points, requires synchronization primitives, and is hard to test. The beauty of futures is that you create an implicit state machine without writing the state out.
Maybe I just can't imagine it. What's a good language that shows off the style you're suggesting?
What you describe sounds like native UI work since forever before javascript. "Don't block the main thread" and all that.
JavaScript is different in that it's single-threaded with an event loop. Synchronous functions execute until they end. Asynchronous functions are handled by the event loop, which "loops" through the pool and runs each one for some time, then switches to another, concurrently (think round robin). What happens when the runtime is running an asynchronous function and inside it reaches a synchronous one? It stops the round robin and executes that function until it ends.
What the OP wants is a language like JavaScript, but without having to write code that distinguishes synchronous from asynchronous functions; instead, some other tool would tell the runtime whether a function is synchronous or asynchronous, without having to write it twice.
Still, in practice it's just not that hard, use async methods if you're writing async code.
No, it doesn't really. 'Async' is a strictly Python problem, due to the insanity of the GIL. Predictably, the Python solution to it is also insane.
Why you have to turn a sane language like Rust into an insane one by cargo-culting a solution to a non-problem is a mystery to me.
Oh well, good thing at least C++ hasn't dropped the ball.
> "While it is theoretically not an insurmountable challenge , it might be a major re-engineering effort to the front-end structure and experts on two compiler front ends (Clang, EDG) have indicated that this is not a practical approach."
So the short answer seems to be that due to technical debt on the part of existing C++ compilers and their "straight" pipeline, the front-end cannot anticipate the size necessary for some book-keeping information traditionally handled by the code-generator. I'll take Richard Smith's word on Clang.
- Asynchronous programming is completely orthogonal to Python
- If you don't need it, don't use it. Rust without asynchronous functions still looks and works like before
- By the way: welcome to C++20's coroutines
The difference between synchronous code and async code implemented as libraries is that async code involves jumping in and out of functions a lot, while employing runtime library code in between. A piece of code that is conceptually straightforward, may, in the async case, involve multiple returns and restores. In the sync case it doesn't need to do that, since it just blocks the thread and does the processing in other threads and in kernel land.
Rust's async/await support makes it possible to write code that is structurally "straightforward" in a way similar to how synchronous code would be. That allows the borrow checker to reason about it much as it would reason about sync code.
I'd love for Rust to eventually get a Move trait (something like C++ move constructors) to resolve this. Besides some complexity in designing it, there is resistance from some corners about having anything execute on a move.
This is the challenge, and async/await lets you create these kinds of self-referential types without unsafe code.
If you want to defer execution of a promise until you await it, you can always do that, but this paradigm forces you to do that. The problem is then, how do I do parallel execution of asynchronous tasks?
In JavaScript I could do
const results = await Promise.all([
asyncTaskA(),
asyncTaskB(),
asyncTaskC()
]);
and those will execute simultaneously and await all results. And that's me deferring execution to the point where I'd like to await it, but in JavaScript you could additionally do
const results = await Promise.all([
alreadyExecutingPromiseA,
alreadyExecutingPromiseB,
alreadyExecutingPromiseC
]);
Where I pass in the actual promises which were returned from having called the functions at some point previously. So how is parallel execution handled in Rust?
Begin execution where?
If every future started executing immediately on a global event loop, that event loop would need to allocate space for every future on the heap. A heap allocation for every future is exactly the sort of overhead that Rust is trying so carefully to avoid. With Rust futures, you can have a large call tree of async functions calling other async functions. Each one of those returns a future, and those futures get cobbled together by the compiler into a single giant future of a statically known size. Once that state machine object is assembled, you can make a single heap allocation to put it on your event loop. Or if you're going to block the current thread waiting on it, depending on what runtime library you're using, you might even get away with zero heap allocations.
This sort of thing is also why Rust's futures are poll-based, rather than using callbacks. Callbacks would force everything to be heap allocated, and would work poorly with lifetimes and ownership in general.
These issues are not an issue in JavaScript and other languages, since objects are individually allocated on the heap there anyway.
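To see the laziness and poll-based design described above in action, here's a std-only sketch; the no-op waker and busy-poll `block_on` are stand-ins for a real runtime, kept minimal on purpose:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

static RAN: AtomicBool = AtomicBool::new(false);

// Calling this function only constructs the future; the body does not
// run until the future is polled.
async fn side_effect() -> u32 {
    RAN.store(true, Ordering::SeqCst);
    42
}

// Minimal no-op waker: enough to build a Context for polling.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Busy-poll executor; a real one would sleep until woken.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // Safety: `fut` is never moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let fut = side_effect();
    // The async fn body has NOT run yet: futures are inert values.
    assert!(!RAN.load(Ordering::SeqCst));

    // It only runs once something polls it.
    let v = block_on(fut);
    assert!(RAN.load(Ordering::SeqCst));
    assert_eq!(v, 42);
    println!("future ran lazily, result = {}", v);
}
```

The point of the sketch is the first assertion: the future sits on the stack, unallocated and unexecuted, until an executor polls it.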
let (result_a, result_b) = futures::join!(asyncTaskA(), asyncTaskB());
See the join macro of futures[0]. The way it works is, it creates a future that, when polled, calls the underlying poll function of all of the futures, saving each eventual result into a tuple. This allows making progress on all of the futures at the same time.
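That "poll them all, stash results in a tuple" description can be written out by hand in std-only Rust. This is a simplified sketch of what the macro does for two futures, not the real expansion (which also handles wakers and pinning more carefully):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Each poll of Join2 polls both children, stashing results as they
// complete; it is Ready only once both are done.
struct Join2<A: Future, B: Future> {
    a: A,
    b: B,
    a_out: Option<A::Output>,
    b_out: Option<B::Output>,
}

impl<A: Future, B: Future> Future for Join2<A, B> {
    type Output = (A::Output, B::Output);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Safety: we never move `a` or `b` out of the pinned struct.
        let this = unsafe { self.get_unchecked_mut() };
        if this.a_out.is_none() {
            if let Poll::Ready(v) = unsafe { Pin::new_unchecked(&mut this.a) }.poll(cx) {
                this.a_out = Some(v);
            }
        }
        if this.b_out.is_none() {
            if let Poll::Ready(v) = unsafe { Pin::new_unchecked(&mut this.b) }.poll(cx) {
                this.b_out = Some(v);
            }
        }
        if this.a_out.is_some() && this.b_out.is_some() {
            Poll::Ready((this.a_out.take().unwrap(), this.b_out.take().unwrap()))
        } else {
            Poll::Pending
        }
    }
}

// Minimal no-op waker and busy-poll executor, just enough to drive the
// example; a real runtime would park until woken.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is not moved again after this point.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

async fn task_a() -> u32 { 1 }
async fn task_b() -> u32 { 2 }

fn main() {
    let joined = Join2 { a: task_a(), b: task_b(), a_out: None, b_out: None };
    let (a, b) = block_on(joined);
    assert_eq!((a, b), (1, 2));
    println!("joined results: {} {}", a, b);
}
```

Note that the combined future is one flat value of statically known size, which is exactly the "no per-future heap allocation" property discussed above.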
Without a yield instruction it's strange to ask "how do I start all these futures before I await," and join does make sense because it does both of those things. But other languages can start futures, yield, reenter, start more futures, and wait for them all while making progress in the meantime.
I'm curious what the plan is there.
In JS, for example, the `bluebird` library is a third party utility for managing execution of functions. You can do things like
const results = await Promise.map(users, user => saveUserToDBAsync(user), { concurrency: 5});
And I pass in thousands of users, and can specify `concurrency: 5` to ensure it will execute no more than 5 simultaneously. Implementing this behavior in user space is trivial in JS; is it possible in Rust?
The wording is a bit imprecise; you can't 'await' something to invoke it, exactly. It also won't begin execution until .await is called. What happens is, at some point, you have a future that represents the whole computation, and you pass it to an executor. That's when execution starts.
There's a join macro/function that works the same way as Promise.all, and is what you'd use for parallelism.
For instance, I use the CurrentThread runtime in tokio, because I'm using the rust code as a plugin to Postgres, and it accesses non-thread-safe APIs.
What you are asking for is essentially for the futures runtime to be hidden from you. That's fine for some languages that already have a big runtime and don't need the flexibility to do things differently, but doesn't work for rust.
If they are, then they're still not referentially transparent. But if they aren't then it might be a bit of a surprise to developers coming from other languages (especially ones not familiar with something like an IO monad).
Async is about interleaving computations on a single thread.
* First, it runs the callee synchronously until the first await, which can fire off network requests, etc.
* Second, continuations are pushed onto queues immediately: the microtask queue that runs when the current event handler returns, for example.
Rust does neither of these things:
* Calling an async function constructs its stack frame without running any of its body.
* Continuations are not managed individually; instead an entire stack of async function frames (aka Futures) is scheduled as a unit (aka a "task").
So if you just write async functions and await them, things behave much more like thread APIs: futures start running when you pass a top-level future to a "spawn" API, and futures run concurrently when you combine them with "join"/"select"/etc. APIs.
Think of it this way. I have 3 letters I need to send, and I'm expecting replies for each. A single threaded, synchronous language, would basically send the first letter, wait for the reply, send the second letter, wait for the reply, then send the third and wait for the reply. In JS, you're still single threaded, but you just recognize that there is no point in sitting around and waiting before moving onto the next item. So you send the first letter, and when it would be time to wait, you continue executing code, so you immediately send the next letter, and then finally send the 3rd.
How they're scheduled simultaneously on a single thread is exactly what makes JS so fast for IO. Once it starts making an http call, db call, disk read, etc, it will release the thread to begin execution of the next item in the event loop (which is the structure JS uses under the hood to schedule tasks).
So what really happens is when we call
asyncA();
asyncB();
JS will go into `asyncA`, run the code, and at some point it will get to a line that does something like "write this value to the database." This will be an asynchronous behavior, which it knows will be handled with a callback or a Promise, so it will immediately continue execution of the code. So now it pops out of executing `asyncA` and executes `asyncB`; meanwhile the call to the DB has gone out and we don't care whether it has finished, as we'll await both of these when we need them. More info: https://dev.to/gajus/handling-unhandled-promise-rejections-i...
The reason Rust does this is because its priorities are different than these other languages. Rust is built around zero-cost abstractions because it is intended to be as fast as possible while still safe. That's one of many reasons why Rust is considered a systems language and Javascript is not. They're for different things.
For more on how async in Rust works, I'd invite you to read the actual manual on the subject (linked from the announcement): https://rust-lang.github.io/async-book/06_multiple_futures/0...
In a language like Haskell, where everything is lazy, it's not uncommon for someone to model their logic in such a way that computations that are not necessary are never run but appear to be used in code anyway.
Depending on what you want to do, it's also possible to start threads/green-threads (using something like Tokio), and use message passing for async tasks where you do not need to process the result synchronously.
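A thread-based sketch of that message-passing pattern, using only std's channels; the same shape applies with an async channel (e.g. Tokio's mpsc), with `recv().await` in place of blocking iteration:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Workers send results over a channel; the caller consumes them
    // as they arrive instead of awaiting each task's result in order.
    let (tx, rx) = mpsc::channel();
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(format!("task {} done", id)).unwrap();
        });
    }
    drop(tx); // drop the original sender so the receiver loop can end

    let mut results: Vec<String> = rx.into_iter().collect();
    results.sort();
    assert_eq!(results.len(), 3);
    println!("{:?}", results);
}
```

Dropping the last sender is what terminates the receive loop; forgetting it is a classic way to hang this pattern.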
I found that most code that actually uses create_task tends to be super complex and often bug-ridden, since eventually you have to await the task anyway, and it's easy to miss this, which leaves errors unhandled, especially when propagating cancel() to all these executing futures floating in the air.
Both will execute at the same time and you'll get a tuple of results.
Also, this has all the goodness: open source, high-quality engineering, design in the open, many contributors to a complex piece of software. Truly inspiring!
This is a foundational implementation, and while you _can_ use it, you are also likely to run into a host of partially-implemented-support problems. No fault of anyone, just a lot left to do. For example: you may run into needing async FS ops, so you bring in one of those libs. You may need async traits, so you bring in macro helper libs. You may need async DB interaction, so you check your db lib and hope it has support. Etc., etc.
None of those are problems to stop you from using Async if you want. They're merely reasons you may not want to use Async quite yet.
Personally I've found Rust's development cycle to be such a joy, including manual threading, that I can get by happily without Async for the time being.
But, I'm not currently doing a lot of work where I'm dying to use greenthreads-like paradigms. Threading works fine in my web frameworks, db pooling, parallel libs like Rayon, etc. So because I've got all the power in Rust I need currently I just have no reason to use Async.
WITH THAT SAID.. use it if you want it! I just might not recommend it to people coming into Rust unless they explicitly need Async. It's bound to introduce a few - hopefully mild - headaches, currently.
There are crates that provide ready-made ones, and that will work for almost all cases, but it's another dependency that you have to evaluate and stay on top of.
It is entirely possible to do yourself, though. Last month, I dove into the details during a game jam. Not much of a game came out, but I did manage to get a useful async system up and running from scratch:
https://github.com/e2-71828/ld45/blob/master/doc.pdf (cf. Chapter 3)
Rust is the first language that is both truly good as a low-level systems language and also truly good as a high-level modern language at the same time.
To me, it's amazing to finally have another option aside from just C/C++.
Technically, I might have had some other options before, but rust gets it right.
I'm pretty sure the state of affairs for async programming is still a bit "different" in Rust land. Don't you need to spawn async tasks into an executor, etc.?
Coming from JavaScript, the built in event-loop handles all of that. In Rust, the "event loop" so to speak is typically a third party library/package, not something provided by the language/standard itself.
There are some differences, yes.
> Don't you need to spawn async tasks into an executor, etc.?
Correct, though many executors have added attributes you can tack onto main that do this for you via macro magic, so it'll feel a bit closer to JS.
https://github.com/prisma/prisma-engine/
Been working with the ecosystem since the first version of futures some years ago, and I must say that the way things are right now, it's definitely much, much easier.
There are still optimizations to be made, but IO starts to be in a good shape!
Basically we offer a code generator for typescript, migrations and a query engine to simplify data workflows. Go support is coming next.
As a rust noob, small question based on the example given: Why does `another_function` have to be defined with `async fn`? Naively, I would expect that because it calls `future.await` on its own async call, that from the "outside" it doesn't seem like an async function at all. Or do you have to tag any function as async if it calls an async function, whether or not it returns a future?
It's kind of like how, if you declare a Python function containing the yield keyword, the function returns a generator rather than simply executing top to bottom.
In the same way that Python's yield only makes sense inside a generator, Rust's await keyword only makes sense inside a Future. Outside of a Future, there'd be no concept of yielding. This is why the "outer" function must be declared async.
Now, would you tag something as async if not required? Likely not; it just makes things more complicated. One exception is when you expect you'll need to modify the body of the function to make use of await, and you want to maintain compatibility.
Note that marking a function as async is only syntactic sugar for a function that returns a future.
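Concretely, these two declarations are (roughly) interchangeable from a caller's point of view; the names here are invented for illustration:

```rust
use std::future::Future;

// Sugared form: the body becomes a compiler-generated state machine.
async fn sugared() -> u8 {
    7
}

// Desugared form: an ordinary fn returning an anonymous Future.
fn desugared() -> impl Future<Output = u8> {
    async { 7 }
}

// Both satisfy the same bound, so callers can't tell them apart.
fn takes_future<F: Future<Output = u8>>(_f: F) {}

fn main() {
    takes_future(sugared());
    takes_future(desugared());
    println!("both return impl Future<Output = u8>");
}
```

In both cases nothing in the body runs until the returned future is awaited or polled.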
However, Rust Futures have a very different implementation compared to CompletableFuture.
I was wondering if a more pleasant approach would be to add a 'defer' keyword to return a future from an async call, and have the default call be to await and return the result (setting up a default scheduler if necessary). Requiring the await keyword to be inserted in the majority of locations seems poor UX, as is requiring callsites to all be updated when you update your synchronous API to async.
Excited for where this takes us! Can't wait for tokio 2.0 now.
Maybe the primary benefit is that it's new and sexy.
If you are talking about async/await:
- for concurrency it has been in the standard library for a couple of years. Also you can implement it as a library without compiler support.
- for parallelism, Rust is ahead. Nim has a simple threadpool with async/await (spawn/^); it works, but it needs a revamp, as there is no load balancing at all.
You can also fall back on raw pthreads/Windows fibers and/or OpenMP for your needs, or even OpenCL and CUDA.
Regarding the revamp you can follow the very detailed Picasso RFC at https://github.com/nim-lang/RFCs/issues/160 and the repo I'm currently building the runtime at https://github.com/mratsim/weave.
Obviously I am biased as a Nim dev who uses Nim for both work and hobby, so I'd rather have others who have tried both comment on their experience.
I like coroutines, but that's mostly because they are not threads: they only switch execution on yield, and that makes them easy to reason about :)
idk about Lua, but afaik python's async is pretty much implemented on top of coroutines ("generators"). `await` is basically `yield`
> I like coroutines, but that's mostly because they are not threads: they only switch execution on yield, and that makes them easy to reason about :)
pretty sure i've heard the same thing said about async io!
And that would leave the motivation to be performance gains from being able to reduce the number of threads, I guess.
(And that it is cool of course :)
I tried out Rust for a typical server-side app over a year ago (JSON API and PostgreSQL backend), and the lack of async-await was the main reason I switched back to Typescript afterwards, even though Diesel is probably the best ORM I've ever worked with. Time to give it a try again.
I heard it uses locks?
This is not on the same memory right? https://news.ycombinator.com/item?id=21469295
If you want to do joint (on the same memory) parallel HTTP with Java I have a stable solution for you: https://github.com/tinspin/rupy
If you want two threads running in parallel to concurrently access the same memory location, you don't need synchronization if you only perform reads, but you do need it if there is at least one write. Like in any other language (this comes directly from how CPUs work).
The good thing with Rust is that you can't shoot yourself in the foot: you can't accidentally have an unsynchronized mutable variable accessible from two threads; the compiler will show you an error (unless you explicitly opt out of this safety by using unsafe primitives, in which case the borrow checker will let you go).
The solutions range from mutexes to copy-on-write and more.
So it can operate on "the same memory", and there are a whole lot of ways to manage it safely. The right tool for the right job, really.
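A minimal std-only sketch of the mutex end of that range: several threads mutating one shared counter behind `Arc<Mutex<_>>`, which is the basic "same memory, safely" pattern in Rust:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several threads increment one shared counter; the Mutex makes the
// writes safe, and the Arc makes the sharing itself safe.
fn count_in_parallel(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *c.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    let total = count_in_parallel(8, 1000);
    assert_eq!(total, 8000); // no lost updates: every write held the lock
    println!("total = {}", total);
}
```

Remove the Mutex (say, via an unsynchronized `static mut`) and the compiler rejects the program, which is the point of the comment above.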
There are ways to perform parallel work that are "safe". Rust solves this with its borrow checker implementation; another clearly "safe" way is to make everything immutable, as in Haskell, OCaml, or F#: who cares about who gets to what first if the underlying thing never changes?
Mutexes, locks, and all the other ways of doing parallel work that are "safe" aren't primitives cooked into the language.
I need a language with minimal runtime, good support for the C ABI and structure formats, decent macros, good ecosystem and community for mainstream programming needs, and support for async.
Where else can I find that whole package?
Go has too much of a runtime, GC, and doesn't support C structure formats very well (i.e. best case you need to copy/translate them to Go; you can't operate on them directly).
C++ can do it, but the details are ugly and the result unsatisfying.
Ada lacks macros and a mainstream ecosystem.
So... what do I use if not rust?
I can't imagine doing that project in any other language.
a bit too excited. GP isn't wrong, Go pretty much has the same thing and I have never seen so much fanboyism for a single feature ever in my career.
I don't get it, but that might be because I am a manager.