A transition to ISO 20022 would be an improvement, but I don't think it will ever meet the required ROI threshold (globally) for that to happen.
Would be curious if anyone else has further insights.
And if you look at how "modern" ISO 8583 is evolving, almost all changes and extensions are already happening in TLV-like subfields (where a new field unexpectedly appearing doesn't make existing parsers explode spectacularly), and the top-level structure is essentially irrelevant.
Of course, that outer layer is a significant hurdle for newcomers to get familiar with, but I don't get the sense that catering to them is a particular focus for either network. ISO 8583 is also a great moat (one of many in the industry, really) for existing processors, who have no incentive to switch to a new standard and whom the networks need to keep at least somewhat happy.
Because no LLM will be able to replace you for quite a while.
It's the lingua franca of European banks and has been for some time. Back in 2018, when I built a piece of financial software, I talked ISO 20022 with a Swedish bank in Luxembourg.
On many other payment systems, yes, ISO 20022 is or is becoming the lingua franca - e.g. FedWire is going to move next year.
Banks and payment card processing (which is what TFA is about) are basically two different worlds. One switching to a new data interchange format has essentially no consequences to the other.
While I could imagine Visa and Mastercard offering an ISO 20022 interface for new integrations, I’m willing to bet on the majority of volume staying on ISO 8583 for this decade and probably well into the next. They most certainly won’t force anyone to migrate either.
like https://jpos.org/
dear god will I never forget all of these terrible details
What I find baffling is that this "standard" does not specify how to encode the fields or even the message type. Pick anything: binary, ASCII, BCD, EBCDIC. That doesn't work as a standard: every implementation could send you nearly any random set of bytes, with no way for the receiver to make sense of them.
Not really: C structs notably don't have variable-length fields, but ISO 8583 very much does.
To add insult to injury, it does not offer a standard way to determine field lengths. This means that in order to ignore a given field, you'll need to be able to parse it (at least at the highest level)!
Even ASN.1, not exactly the easiest format to deal with, is one step up from that (in a TLV structure, you can always skip unknown types by just skipping "length" bytes).
Feast your eyes: C99 introduced an ~~abomination~~ feature called flexible array members (FAM), which allows the last member of a struct to be a variable length array.
If you want to ~~gouge your eyes~~ learn more, see section 6.7.2.1.16 [0].
> To add insult to injury, it does not offer a standard way to determine field lengths
That's awful. You can sort of say the same about variable-length structs in C, but at least the struct type definition usually has a field that tells you the length of the variable-length array at the end.
[0] https://rgambord.github.io/c99-doc/sections/6/7/2/1/index.ht...
Finally, free tooling doesn't really exist. The connection to the OSI model also didn't help.
I had made up the "ASN.1X", which includes some additional types such as: key/value list, TRON string, PC string, BCD string, Morse string, reference, out-of-band; and deprecates some types (such as OID-IRI and some of the date/time types; the many different ASCII-based and ISO-2022-based types are kept because a schema might have different uses for them in a SEQUENCE OF or SET OF or a structure with optional fields (even though, if I was designing it from the start, I would have not had many different types like that)), and adds a few further restrictions (e.g. it must be possible to determine the presence or absence of optional fields without looking ahead), as well as some schema types (e.g. OBJECT IDENTIFIER RELATIVE TO). (X.509 does not violate these restrictions, as far as I know.)
I also have an idea relating to a new OID arc that would not require registration (there are already some, but this idea has some differences in its working, including a better structure with the working of OID); I can make (and have partially made) the document of the initial proposal of how it could work, but it would need to be managed by ITU or ISO. (These are based on timestamps and various kinds of other identifiers, which may already be registered at a specific time; even if those are not permanent, the OIDs will be permanent due to the timestamps. It also includes some features such as automatic delegation for some types.)
There are different serialization formats for ASN.1 data; I think DER is best and that CER, JER, etc. are no good. I also invented a text-based format, which can be converted to DER (it is not really meant to be used in other programs, since it is more complicated than parsing DER, so using a separate program to convert to DER is better in order to avoid adding such complexity to programs that do not need it), and I wrote a program that implements that.
I mean, this specific thing sounds like it should have been a fatter but much more unexciting TLV affair instead. But given it’s from 1987, it’d probably have ended up as ASN.1 BER in that case (ETA: ah, and for extensions, it mostly did, what fun), instead of a simpler design like Protobufs or MessagePack/CBOR, so maybe the bitmaps are a blessing in disguise.
[1] https://google.github.io/flatbuffers/flatbuffers_internals.h...
Luckily, there's a bit of everything in the archeological site that is ISO 8583, and field 55 (where EMV chip data goes, and EMV itself is quite ASN.1-heavy, presumably for historical reasons) and some others in fact contain something very close to it :)
We are living in changing times on many fronts, and the world's financial systems are one of the new battlefields. Many are ignorant of what is occurring, but with big tech owning their own payments ecosystems, this should be insight for those not yet aware, as we are absolutely certain to see more following their lead. Some of those followers are entire countries (they are just a bigger business, after all), and a small select few are already doing it.
Stay Healthy!
Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this, which seems a boost for paper technologies.
[0] https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Sec...
> Interestingly local real (non-digital) currencies like the Brixton Pound [2] and other local paper scrip seem to escape this
And so do countless other digital (non-real?) payment systems across the globe. That's not to say that there aren't any other security regulations, but they're also most certainly not in PCI scope.
Arguably, the original sin of the card payments industry in particular, and US banking in general, is treating account numbers as bearer tokens, i.e. secret information; if you don't do that, it turns out that a lot of things become much easier when it comes to security. (The industry has successfully transitioned off that way of doing things for card-present payments, but for card-absent, i.e. online, transactions, the efforts haven't been nearly as successful yet.)
- PCI DSS 4.0 is already in place and will be retired on December 31, 2024. PCI DSS 4.0.1 is the replacement and is already in place.
- PCI DSS 4.0.1 and game tokens have nothing in common. The applicability of PCI DSS requirements is decided by the card brands, aka Visa, Mastercard, etc., and it is up to the acquirers to enforce the standard on third-party service providers. The standard itself has no power over anyone.
- Mastercard and Visa have high stakes because technically they are the regulators. EMVCo, the core of card payments, was built by Europay (later acquired by Mastercard), Mastercard, and Visa; the M and V of it manage the chip on cards, online payments, and much more. PCI SSC is merely a supervisory authority that sets the standard and the process of assessments and investigations on behalf of these brands.
Side note: While the other card brands accept PCI DSS as an entry-level requirement, they do not have as much say in it as Mastercard and Visa.
https://www.antipope.org/charlie/blog-static/fiction/acceler...
Actually based on the timing, this is probably the new better standard that replaced the obscure protocols of the 70s.
The "great" thing about most ISO 8583 implementations I've worked with (all mutually incompatible at the basic syntactic level!) is that they usually freely mix EBCDIC, ASCII, BCD, UTF-8, and hexadecimal encoding across fields.
No idea where you bank, but my Visa notifications are instantaneous, so the network is clearly capable of that. I'm with a modern European bank though, so I wouldn't be surprised if the mainframe-loving US banks that do everything via overnight batch jobs are incapable of this.
With that said, there are places which straight up won't send your transaction to the network at purchase time. Apple[1] is one notable example. They seem to have a cron job that runs at midnight and does billing for that day. This is really annoying if you're buying an expensive Apple product and have increased your card limit for one day only.
I've even seen places that do extremely-low-value transactions "on faith" - the transaction is entirely offline, it gets sent to the network the next day, and if it is rejected, your card number goes on a blacklist until you visit the company's office in person and settle the debt.
I wouldn't be surprised if your European bank still relies quite heavily on its mainframes. The mainframe offers high availability, reliability, and, contrary to popular belief, high throughput. Batch processing is just one thing they're good at; real-time processing at high speed is another.
Did you mean Apple Card, Apple Pay, or the Apple Store?
Visa/MC transactions go through at least four systems (merchant, merchant payment service provider/acquirer, Visa/MC, issuer processor); Amex is merchant acquirer and card issuer in one, so there is no network and accordingly at most three parties involved (merchant, merchant PSP, Amex).
That said, some of my Visa/MC cards have very snappy notifications too. In principle, the issuer or issuer processor will know about the transaction outcome even before the merchant (they're the one responding, after all), and I definitely have some cards where I get the notification at the same time and sometimes before the POS displays "approved".
Even more magical: when sending money to someone I'm physically present with, I hear their notification before the "money sent" animation finishes on my own phone.
Must be your bank, because both my Visa and MasterCard ping my phone instantaneously, too.
Well, that's sort of the thing... with Visa and MC, there is an extra layer or two of the bank or Fidelity Information Services. With Amex, they own the full stack end to end.
"ISO 8583: The language of both debit and credit cards"
Also don't forget about prepaid cards, charge cards etc., depending on where they exist in your personal ontology of card funding methods ;)
Credit cards, with all their nonsense about cashback and rewards, can be implemented in the new payment systems with banks offering line-of-credit accounts that are part of the appropriate "national payment system", like UPI, PromptPay, Osko/PayID, FedNow etc.
Instant settlement between accounts, low cost fixed price txns etc.
All the big fintech companies have ML running over changes to identify what results in the highest auth rates on a per bin basis.
While for sure there's a lot of signal and value in monitoring auth rates per BIN per payload, flipping bits can be extremely disruptive and counterproductive. From doing the wrong operation to being fined by the schemes, it's a lot of risk for not a lot of gain when these fields can be tuned ad-hoc for the few card issuers that deviate from the standard/norm.
The only language of credit cards is points, cashback, APYs, and hard-to-read TOS