Around the introduction of MiFID II regulation in 2018, several exchange operators added these frequent auction books.
Cboe's periodic auctions book is the biggest of these by volume: https://www.cboe.com/europe/equities/trading/periodic_auctio...
In addition to Cboe, Turquoise, Goldman Sachs, UBS, Virtu and Aquis also run frequent batch auction venues: https://www.cboe.com/europe/equities/market_share/market/ven...
I actually co-wrote a paper about this at the time, and it's very rare I get a chance to talk more about it! https://jot.pm-research.com/content/13/3/5 (sadly it's paywalled)
Has that become true in the auction space as well?
Which means most people here won't be able to read it.
Why aren't you uploading it to sci-hub or some other free access venue?
I just linked to it in case someone here had a subscription.
I remember this was being seriously considered by one of our target exchanges (can't remember if it was Eurex or Globex).
Our main HFT trader didn't seem worried - he said that the race would just change from a race to pick off an opportunity into a race to align with any auction timeframe.
Back then, our strategies were implemented in FPGA so our response to events could be timed very accurately. Even randomly-timed rolling auctions wouldn't have posed any challenges.
Probably explains why this idea never ended up being implemented by any of the major exchanges.
I've been out of the HFT business for a while, so I guess things have moved on.
All systems have waste, some more and some less. This is unavoidable. So the discussion might be more productive if it was framed like this: Which system provides more benefits with less waste? Would frequent batch auctions lead to fewer resources being spent on wasteful racing, and more resources performing useful services for other market participants?
Or if we zoom out even more, the two main purposes of the market are allocating capital efficiently to businesses, and redistributing money from the working population to retired people (401k, IRA). And we can ask which market structure would make it better at these tasks?
Framed like this, it becomes natural to look at the other side of the equation. Instead of asking which structure would screw over HFTs the most, we can ask which structure would be most convenient for, say, index funds or other mutual funds. And which structure would be most convenient for the individual stock picker. We could even start to ask which structure would help HFTs provide more liquidity with less risk.
Add to that the fact that HFTs are profitable, and therefore must provide negative economic value: either the seller or the buyer is failing to capture value.
Wouldn't markets function better if every participant had a reasonable amount of time to make decisions?
They use linear models and soft cores. Plenty complicated for the kind of arb you can get with tick-to-trade 500ns better than the next firm.
I have an underdeveloped idea that what we really need is limit order types with built-in hedging. "Bid to buy 100 gizmos at 30c each, and for every five gizmos bought, immediately offer to sell 1 widget at $1.20; cancel this order if the best offer for widgets moves below that price" sort of thing. Basically, you're moving the simple reasoning that has to be executed at low latency from the market maker's FPGA to the exchange's matching engine.
Sometimes, you can do this by putting orders in spreads, but only where a spread exists (or can be defined) for the two legs you care about, in the right ratio.
You might also want to do more complicated things, like pulling an order in one product if another product moves a lot, because you think that presages a move in the product you're quoting.
The idea would be, firstly, to make it much easier to make markets without having to invest in low-latency infrastructure, broadening the base of participants who can do it, and secondly, to reduce the negative impact of speed-blunting interventions like frequent batch auctions or speed bumps.
The more powerful and general version of this is: "Buy and sell any mix of products, subject to the total package being neutral across these 10 risk factors I care about."
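The hedged-order idea above could be sketched as a data structure the matching engine evaluates on each fill. Everything here is hypothetical (HedgedOrder, on_fill, should_cancel are illustrative names, not any real exchange API), just a minimal sketch of the parent's "100 gizmos / 1 widget per 5 fills" example:

```python
from dataclasses import dataclass


@dataclass
class HedgedOrder:
    """Hypothetical limit order with a built-in hedge rule (illustrative only)."""
    side: str            # "buy" or "sell"
    symbol: str
    qty: int
    limit: float         # limit price in dollars
    hedge_symbol: str
    hedge_ratio: int     # one hedge leg per this many units filled
    hedge_qty: int
    hedge_price: float
    cancel_below: float  # cancel if best offer in hedge_symbol drops below this
    filled: int = 0
    hedges_placed: int = 0

    def on_fill(self, fill_qty: int):
        """Called by the matching engine on each fill; returns hedge orders to place."""
        self.filled += fill_qty
        orders = []
        # For every `hedge_ratio` units filled, emit one hedge offer.
        while (self.hedges_placed + 1) * self.hedge_ratio <= self.filled:
            self.hedges_placed += 1
            orders.append(("sell", self.hedge_symbol, self.hedge_qty, self.hedge_price))
        return orders

    def should_cancel(self, best_offer_hedge: float) -> bool:
        """Cancel the resting order if the hedge market moves through our level."""
        return best_offer_hedge < self.cancel_below


# "Bid to buy 100 gizmos at 30c; for every 5 gizmos bought, offer 1 widget at
#  $1.20; cancel if the best widget offer moves below that price."
order = HedgedOrder("buy", "GIZMO", 100, 0.30, "WIDGET", 5, 1, 1.20, 1.20)
hedges = order.on_fill(12)          # 12 gizmos filled -> 2 widget offers
print(hedges)
print(order.should_cancel(1.19))    # widget offers moved below $1.20 -> cancel
```

The point of the sketch is just that the contingent logic is simple enough to live in the exchange's matching engine rather than on the market maker's FPGA.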
> You might also want to do more complicated things, like pulling an order in one product if another product moves a lot
This is a key problem in US equities or any market with similar fragmentation. The way we're approaching that is to allow those package bids to also include constraints on "current" market conditions at the moment of the auction. A simple one would be "if the momentary spread between asset A and B is greater than X, don't trade."
[0]: https://www.forbes.com/sites/forbestechcouncil/2021/12/30/th...
And everyone having the ability to do so at the same level.
Seems like a good idea to me, assuming contract constraints that guarantee the market resolution system will resolve quickly and behave predictably.
And some nano-fees for contract execution to make DoS attacks unprofitable (for the attacker; profitable for the market).
Existing mechanisms which obscure liquidity are iceberg orders, market maker protection [1] [2] [3], and various kinds of non-displayed orders [4] [5], which I confess I am not very familiar with.
I think this illustrates that exchanges are sometimes willing to sacrifice a little transparency in order to encourage more liquidity provision. This is a fundamental axis of market design. At one end are classic lit exchanges, at the other end is OTC dealing, and there are all sorts of shades of grey in between. Which is most appropriate will depend on the specific balance of participants and activity in the market in question.
[1] https://www.eurex.com/ex-en/trade/market-making-and-liquidit... ("Risk protection for Market Makers")
[2] https://www.cmegroup.com/confluence/display/EPICSANDBOX/Mass...
[3] https://www.nasdaq.com/docs/market_maker_protection_model_-_...
[4] https://www.cboe.com/us/equities/trading/offerings/non_displ...
There are some interesting details about how large trades are done today that could perhaps be better reflected in some element of auction design.
https://www.researchgate.net/publication/24139396_Specialist...
so I think there's a huge design space, and I think it partially turns into a "mechanism design" challenge to articulate a landscape of transaction / market auction mechanisms that
1) incentivize maximizing market liquidity
2) recognize the speed of light is finite, and have that inform the minimal time scale matching can happen on.
3) obviate/remove the need to obscure large trades as a large number of smaller trades (which is half the value of so-called algorithmic trading strategies to institutional investors). This could be via having one design constraint on auctions be that the market impact of the sum of the small trades should be equivalent to the single large trade (ignoring the issue of the exogenous information that there was a large trade).
some interesting knock on consequences of these ideas are the following
1) the larger the time scale you're willing to wait for the trade to be matched to "the other side", the cheaper it should be to trade! (creating liquidity is valuable!)
2) if you're willing to allow your trade to be "partially matched" instead of all or nothing, that too creates liquidity.
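Purely as an illustration of those two consequences (the numbers, caps, and the function itself are made up, not any real fee schedule), a venue could discount fees for patience and partial-fill tolerance like this:

```python
def trade_fee(base_fee: float, max_wait_s: float, allow_partial: bool) -> float:
    """Illustrative only: discount patience and partial-fill tolerance,
    since both add liquidity to the batch. All constants are arbitrary."""
    discount = min(0.5, 0.1 * max_wait_s)  # cap the time-based discount at 50%
    if allow_partial:
        discount += 0.2                    # flat bonus for accepting partial fills
    return base_fee * (1 - min(discount, 0.7))


print(trade_fee(1.0, 0, False))   # impatient, all-or-nothing: full fee
print(trade_fee(1.0, 10, True))   # patient, partial-fill-tolerant: 70% off
```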
the point being, my perspective is sorta: start with "what are all the complications of how people do large/complicated trades today that should just be trivial with the right auction?" that's glossing over a lot of complexity and other concerns, but those are some high notes.
that said, this is just the tip of the iceberg. These sorts of market design questions are genuinely understudied, in my mind, and I could easily spend hours talking about this in greater depth over coffee or such.
Put differently, I think what is going to happen is you will start stacking way more orders at each interval than you can process before the next because the wonderful CPU pipelining effects get wrecked each time you hit an arbitrary time slice boundary. I suppose you could intentionally spin the CPU instead of yielding back to the OS during these delays, but that means you are not able to process any orders that are currently arriving, so your tail ends up growing longer and longer.
Batched auctions require different algorithms, sure. They may even be more expensive to execute. I suppose you have to sort the batch once instead of sorting as you go. Maybe that makes it O(n log n) instead of O(n)? Can you keep a traditional order book up-to-date in O(1) per transaction? Either way, seems like this should be a non-issue. Even if exchanges need to add more shards for order processing, that's just not a big deal.
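To make the cost comparison concrete, here is a minimal uniform-price batch clear, assuming one simple convention (midpoint of the marginal bid/ask) for the clearing price; real auction algorithms differ, and the dominant cost here is the two O(n log n) sorts:

```python
def clear_batch(bids, asks):
    """bids/asks: lists of (price, qty) limit orders collected over one batch
    interval. Returns (clearing_price, matched_qty). Illustrative sketch only."""
    bids = sorted(bids, key=lambda o: -o[0])  # highest bid first
    asks = sorted(asks, key=lambda o: o[0])   # lowest ask first
    matched = 0
    price = None
    bi = ai = 0
    bq = aq = 0
    bp = ap = 0.0
    while True:
        if bq == 0:                 # current bid exhausted: load the next one
            if bi >= len(bids):
                break
            bp, bq = bids[bi]
            bi += 1
        if aq == 0:                 # current ask exhausted: load the next one
            if ai >= len(asks):
                break
            ap, aq = asks[ai]
            ai += 1
        if bp < ap:                 # remaining orders no longer cross
            break
        q = min(bq, aq)
        matched += q
        bq -= q
        aq -= q
        price = (bp + ap) / 2       # midpoint of the marginal crossing pair

    return price, matched


# Crossing volume of 15 units clears at a single price for everyone in the batch.
print(clear_batch([(10.2, 10), (10.0, 10)], [(9.9, 15), (10.1, 5)]))
```

Every order that arrived within the interval trades at the same price, so micro-ordering within the batch stops mattering for price (though see the tie-breaking question below).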
For example, while other markets and the real world move on, you gain info. So the later in the batch you can submit a trade, the greater your advantage.
Technical/practitioner notes:
1. Gaming is a bit of an overloaded term, and in this context, implies that agents are doing something wrong. Mechanism design assumes that agents will respond rationally and strategically to the mechanism they're presented with, so whatever happens is on the designer. Ideally, the mechanism chosen will result in individual responses that collectively optimize the designer's objective function, e.g., maximizing social welfare or the auctioneer's revenue. Suppose the mechanism chosen isn't the "best" one for a specific set of agents & goods. In that case, agents might have individually rational behaviors that result in sub-optimal outcomes relative to what was achievable with another mechanism.
2. Prop 6 isn't entirely predicated on having a random call time (there will be competition over price as long as there are two "fast" types in the market). However, randomizing auction call times is still useful, practically speaking.
Open and closing auctions are batch auctions.
To facilitate liquidity seeking, the major exchanges publish periodic "imbalance" feeds in the minutes and seconds leading up to the auction. As you can imagine, it's gamed in a multitude of ways, including special order types (D-Quotes on the NYSE, anyone?) that only a select few market participants know about or have access to.
The point is, whatever mechanism you choose: over time that mechanism morphs and transforms. New options are provided under the guise of liquidity seeking but really serve to benefit the HFTs.
How does a priority queue work?
If the trades are priority queued, then you have just recreated the speed "arms race" that this idea hopes to eliminate.
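To make that concern concrete: if ties within a batch are broken by price-time priority, a heap keyed on (price, arrival time) still hands the marginal fill to the fastest submitter. A toy sketch (names and timestamps illustrative):

```python
import heapq

def match_order(queue):
    """Pop the highest-priority bid: best price first, earliest arrival second."""
    # heapq is a min-heap, so prices are negated to put the highest bid on top.
    return heapq.heappop(queue)

queue = []
# Entries are (negated price, arrival time in ns, order id).
heapq.heappush(queue, (-10.00, 120, "slow_firm"))
heapq.heappush(queue, (-10.00, 45, "fast_firm"))   # same price, earlier arrival
heapq.heappush(queue, (-9.99, 10, "worse_price"))  # earliest, but worse price

best = match_order(queue)
print(best[2])  # the price tie is broken by arrival time: speed still wins
```

A batch design that wants to kill the race has to break ties some other way, e.g. pro-rata allocation or randomization within the interval.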
The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Solution [pdf] - https://news.ycombinator.com/item?id=20003222 - May 2019 (4 comments)
(Vickrey auctions are pretty much dead now because websites saw that bidders were bidding $X, and automatically assumed that because they were receiving $(X - Y) rather than $X, they were being ripped off)
Have you ever wondered how it is possible that when you buy some shares of SPY, someone out there is somehow able to collect 500 securities to fulfil your order? Even if it's not a literal action-reaction, that is what must be happening at the margin.
Also, if you still don't believe me, then try to find out how to unpack X amount of shares of SPY (for some non-small amount of X) into individual securities. Can you do it yourself, for example? Whom to call, where's the button for that, who can do it?