https://news.ycombinator.com/item?id=13139638 (2016), https://news.ycombinator.com/item?id=9799007 (2015)
Both threads generated a lot of really interesting discussion, and I was curious what the discussion would sound like if this were asked again in 2018.
I want to write code and deploy, not spend most of my time on ops or marry myself to a cloud vendor. I started my company as the only tech person and I feel like I have to be more admin than dev even though I'm doing the same thing as everyone else.
You are describing large-scale setup boilerplate, far from what I'd call a startup in a box. Slap in a simple server like my own project [1], or no server at all until product-market fit, and then, but only then, start scaling.
I've experienced many of those tools in the context of a 200-person company; I can definitely see how managed versions of them could deliver value to a much smaller org.
The Application Runtime (CFAR) is like an OSS Heroku, installable on AWS, GCP, Azure, vSphere, and OpenStack. Then just push your app and scale it out. It was first built as an enterprise Heroku competitor, and the commercial flavors power a bunch of big companies’ infrastructures. It even uses the same “buildpack” model that Heroku uses to provide language and framework dependencies.
The Container Runtime (CFCR) packages k8s in a way to make it easier to deploy, maintain, and scale, and it also deploys and manages the health of the underlying VMs. It originated in collaboration with Google.
Source: I’m a Cloud Foundry Foundation project lead.
I'm assuming you want the product to be built with open source components, so then the startup would be selling a glue script to package these open components together.
The only feasible way to make money with this would be to sell a proprietary glue script. Not sure how acceptable that would be to you or to the potential customer base of this startup.
Robin enables an app-store-like user experience to simplify deployment and lifecycle management of big data, NoSQL, and database deployments, on-premises and in the cloud. It supports application-level snapshots, clones, time travel, QoS, scaling, backup/restore, etc. It takes care of HA, stable hostnames, application templates, events, and notifications.
Check out the demo clips:
Cloudera: https://vimeo.com/213037162/a66e0b4e77
HortonWorks: https://vimeo.com/227832200/8d9c749984
MongoDB: https://vimeo.com/206348836/a36e535add
ELK Stack: https://vimeo.com/225899966/6000bae6f2
Controlling IOPS in a shared environment: https://vimeo.com/171608156
1-click deploy, snapshot, and clone of an entire Oracle database: https://vimeo.com/195549219
Even a RAC cluster: https://vimeo.com/223730427/e8cb8c92f8
And SAP HANA, plus many more applications.
Disclosure: I am the lead developer at Robinsystems
The point is to find problems to solve not just ideas.
I did the original posting of that question and wrote an essay about it afterwards:
One problem is that it requires a minimum of 3 instances, so it can be more expensive than Heroku to start.
I find PaaS can be a little too opinionated; you get boxed in too quickly. Whereas IaaS is too raw. I had a bit of a head-scratch when I recently moved to Google Cloud and was a bit flummoxed about where to put my (backend) session state. Similar experience trying to get blue-green deployments on GKE (Kubernetes).
I feel GCP/AWS and friends will get there. But for now there are definitely some gaps.
But ultimately all these arguments converge to: something that beats k8s's suckiness is a potential startup.
0 - https://docs.openshift.com/enterprise/3.1/install_config/agg...
It is hard to compile a stack that would be useful for most companies.
Yes, attempts have been made - for example, OpenStack. But I feel there is always an element of specific technology based on the needs of the product being built.
I've found this is only kinda true. Most people need the same supporting infrastructure and most people only need to deploy their HTTP-visible app, maybe a cron or backend daemon, and probably a data store. Everything else from logs to metrics to alerting to HA gateways and so on are quite universal.
Which warm intros should I ask for? There are ~1,000 companies in my ideal customer profile. I've got ~500 friends who I'd feel comfortable asking for warm intros, and say these friends each have ~500 friends. After deduplicating, that's ~100,000 second-degree connections, some of whom are decision-makers at companies I'd like to sell to.
I'd want someone to go through my LinkedIn/Facebook/Instagram/Twitter/etc., and tell me something along the lines of: "Ben might know decision-makers at Companies A, B, C, D, and E." And, conversely, I'd like to know all the possible warm introductions that could lead me to Company A (e.g. "Ben, Max, and Jennifer could possibly introduce you to Alice, Bob, and Cameron at Company A").
All of this information is available to me; it's just a total O(N^2) pain to clean and aggregate it. Like, I can certainly spend an hour listening to podcasts and looking through Ben's LinkedIn connections, Facebook friends, Instagram followers - and seeing if any of them are COOs at CPG brands. But I'll run out of podcasts eventually, and then it's not a very high-leverage use of my time to repeat that process for Max, Jennifer, Nate, Christy, et al.
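The aggregation step described above is mechanical enough to sketch. Assuming the contact lists could be exported as simple friend-to-connections mappings (the export itself is the hard part, and all names here are invented), inverting the second-degree graph to find intro paths might look like:

```python
from collections import defaultdict

# Hypothetical exported data: each first-degree friend maps to the
# (name, company) pairs of the people *they* know.
friends = {
    "Ben": [("Alice", "Company A"), ("Bob", "Company A"), ("Dana", "Company F")],
    "Max": [("Alice", "Company A"), ("Cameron", "Company A")],
    "Jennifer": [("Cameron", "Company A"), ("Erin", "Company B")],
}

target_companies = {"Company A", "Company B"}

# Invert the graph: for each contact at a target company,
# collect every friend who could make the introduction.
intro_paths = defaultdict(set)
for friend, contacts in friends.items():
    for name, company in contacts:
        if company in target_companies:
            intro_paths[(name, company)].add(friend)

for (name, company), introducers in sorted(intro_paths.items()):
    print(f"{name} @ {company}: ask {', '.join(sorted(introducers))}")
```

The hard, unsolved part is getting `friends` populated from LinkedIn/Facebook/Instagram in the first place; the deduplication and inversion afterward are trivial.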
The real issue is how custom these solutions need to be, and what weird rules and workflows some need to follow because of internal policies, ISO norms, and whatever else.
I don't think a one-size-fits-all solution would ever really work. However, I strongly believe that companies need to allocate enough time to build tools that work _for them_.
If someone could create a product (probably infrastructure plus a Python IDE) which made doing things the "right way" easy for these users, and which would provide case studies or tutorials to show them WHY doing things correctly is beneficial using analogies to good lab behavior, it would be hugely valuable.
The IT department allocated a number of people (half a dozen?) for some time (8-12 months?) trying to turn it into a maintainable software product.
They failed.
As an example, I wrote a proof of concept script to show that we could automate some basic image analysis in my lab three years ago. That was immediately grabbed by an investigator and put into production without any further thought. Because it was a proof of concept script, it was of course very buggy and required substantial feature addition. This was added without any thought for design etc. Fast forward to today and this code base is a sprawling shit show which is being rewritten for the THIRD TIME. Each time has ended in failure because people failed to observe basic best practice, and this attempt will likely fail too. That is an ENORMOUS waste of investigator time. Another project I can think of involved a model which had a 10,000 line function. No one could trust what was being outputted by the thing, so they eventually abandoned it. That's hundreds of investigator hours down the drain.
In a way I also think this is a language problem. I hope that for some data-intensive projects productive statically typed languages (aka Swift + Tensorflow + Python interop) can help fix this.
I think this need - a real need, that is - can potentially make millions for the enterprising type here.
Why don't I try the same, you might ask. But while I have some ideas, I may not be able to raise the resources needed at the moment.
Some of the best stats you get for telecom coverage will come from the annual reports of commercial telecom companies. E.g. in Kenya you can look at Safaricom. In Somalia I guess you can look at Telecom Somalia and some of the others (keeping in mind their market share).
GSMA do research every few years on the number of SIMs on average per market. This is how they get to a figure about subscribers (ie people) rather than connections (number of SIMs). The latter is what is usually reported by commercial companies.
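The connections-to-subscribers adjustment is just division, but it changes market-size estimates substantially. An illustrative sketch with invented numbers (a real estimate would take reported connections from an operator's annual report and the SIMs-per-subscriber figure from a GSMA market study):

```python
# Illustrative numbers only, not from any real report.
reported_connections = 30_000_000   # SIMs, as operators report them
sims_per_subscriber = 1.8           # market average from a GSMA-style study

# People, not SIMs: divide connections by the multi-SIM factor.
unique_subscribers = reported_connections / sims_per_subscriber
print(f"{unique_subscribers:,.0f}")  # ~16.7M people, not 30M
```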
You can quite easily do some online digging for a given market to get fairly current numbers (ie last year). For doing anything with sizing markets etc I'll always start here. Not with World Bank or similar data which is really outdated and not indicative of current trends in technology at all (which is important for me).
The international development sector is unfortunately very slow to realise what's going on technology-wise - I'd wager Facebook knows a huge amount more about populations in a lot of these African countries than their governments or the international development sector do. Perhaps if they were a bit more genuinely philanthropically inclined they could do some amazing socially impactful stuff right now. But I don't think any of us will count our chickens on that one.
For those who want to follow up on this: you could use current satellite imagery to make population estimates, but it wouldn't be very accurate. You could wait for companies like Planet Labs[1] to get daily photos of the Earth (IIRC 4m resolution) and increase accuracy. You could also hire pilots to strap on a camera and survey the areas of interest. There are also, IMO, unethical spins (especially in oppressive regimes) that you could put on that data (i.e. civilian surveillance). Or, even cheaper, use drones (because you don't need daily or real-time surveillance); that could also have other spinoffs. Or you could literally just send people to go count. I don't know which would be the cheapest.
VoIP, especially multi-tenant VoIP like conferencing, is 70% dependent on YOUR local network connection and 30% on the provider's infrastructure. Having a high-bandwidth pipe is no guarantee of 100% clear and jitterless VoIP communication. Have you had a network engineer do a deep inspection of your VoIP traffic to see if there's any QoS or packet filtering going on that could degrade performance? (E.g. if you're a small office whose voice traffic runs side by side with literally every other network device's, you're GOING to have a bad time.)
You can join a concall via phone number + pin, from your desktop, or from the mobile app. The mobile app will even let you view screenshares or video conf.
The dedicated app runs on just about everything: iOS, OS X, Windows, Android.
On phones you can use Video and Audio or just plain Audio (Low Bandwidth mode).
You can send your BlueJeans room URL and they can join with just a web browser (Web browsers get less features though). Non-Hosts can join the meeting without signing up for anything. They just click and join.
I think the audio and video are pretty good - much better than FaceTime, which admittedly is a low bar because FaceTime is pretty terrible.
There is dedicated hardware that supports BlueJeans, so you can use it in conference rooms.
You can record meetings.
All the standard stuff like screen share and whatnot. You can even ping-pong back and forth. I worked with someone where they had to type in some stuff and I had to type in other stuff on their computer. They just joined my BlueJeans room, and when I was not typing, they could type. No clicking to take control or give it back; it was more or less seamless.
We want to plot big data (up to terabytes). Columns should be selectable and nameable through a GUI. The data should then be added to a database with an ID. Everything should be usable without a scripting language.
Right now the terabytes of data have to be loaded into RAM just to see the first few lines and determine what the columns stand for. Now, I know there are editors that can load data partially, but these have to be installed, which requires admin rights, etc. This is a huge burden in a big company! The process of simply plotting, selecting, and storing data takes a huge amount of time. The solution should be web based because no admin rights are available.
I am often impressed by how many tools and hacks exist simply to get one thing done: visualize measurement data. Excel is not enough, because even the import of dot vs. comma vs. tab etc. takes too much time and has to be relearned every time. Engineers sometimes have to plot data only every few months, and by then there is a new Excel version that autocorrects measurement data to dates or whatever.
In my opinion this would save an obscene amount of work. Right now every engineer is hacking together scripts that are extremely inflexible, even when just CSV-type data has to be handled.
Edit: this also applies to smaller amounts of data, in the megabytes. How can we plot them more robustly than Excel and then select the x and y axes? I am pretty sure we would love to buy a product that solves these issues.
- would you actually be interested in buying this service?
- what sort of visualizations do you actually make? Do they need to be interactive? SVG? Size? How do you use them?
- what exactly do you do with the data before it's plotted, other than selecting columns? Is there aggregation or any other kind of processing?
- how often is this actually used? You say 'sometimes every few months' - does that mean it's like a quarterly report?
- what other well-established tools have you used besides Excel?
- how big is your largest dataset? Size, rows, columns?
- if it also applies to small amounts of megabytes, is there a reason besides simplicity why you can't use PivotChart in Excel? Or Excel in general? Or R/Python to generate it?
I am a data scientist who regularly plots quite large data sets, and I like speed :) It's totally doable to build a service that you can run locally: load a CSV, read like 1% of the data, play with it... and when you get what you want, load the rest, wait a bit, and get the visualization you want.
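The "peek before you load" step needs nothing beyond the standard library. A minimal sketch (file name, delimiter, and row count are placeholders): read only the header plus the first few rows, so a multi-gigabyte file never touches RAM in full.

```python
import csv
import itertools

def preview_csv(path, n_rows=20, delimiter=","):
    """Return the header and the first n_rows of a CSV without
    reading the rest of the file into memory."""
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter=delimiter)
        header = next(reader)
        rows = list(itertools.islice(reader, n_rows))
    return header, rows
```

The header tells you what the columns stand for; a second streaming pass can then pull just the selected columns for plotting or loading into a database.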
But depending on visualization requirements there may be many paths solutions.
Count sketches, reservoir sampling, and similar methods come to mind.
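Reservoir sampling in particular fits this use case: one streaming pass yields a fixed-size uniform sample you can plot, no matter how many rows the file has. A textbook sketch (Vitter's Algorithm R):

```python
import random

def reservoir_sample(stream, k):
    """Uniformly sample k items from an iterable of unknown length
    in a single pass (Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)           # fill the reservoir first
        else:
            j = random.randint(0, i)         # inclusive on both ends
            if j < k:
                reservoir[j] = item          # replace with probability k/(i+1)
    return reservoir
```

For plotting terabytes, `stream` would be a lazy row iterator over the file, so memory use stays at `k` rows.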
I think I have an email address in my profile; feel free to send me something. I am fairly certain that your needs can be satisfied with existing Unix tools. Then again, the reason I worked on the problem in the 1990s was to free up engineer time so they could do more valuable things. A gui and other tools could be worth paying for if the bosses have that mindset.
a) try to plot their data alone and spend time hacking the stuff together. This takes time because the people doing it aren't accustomed to doing it daily. This happens across all kinds of divisions.
b) ask another team (with data scientists) for support. Maybe the engineer has to write a ticket, or the person who should be doing it has other tasks, is on vacation, is unwilling, doesn't reply to the request, etc.
Either way hours are easily spent on solving this seemingly simple task. The amount of time spent is simply staggering.
Unix would also be my personal choice. But getting permission to put a Unix machine on the network for a single user is extremely difficult. Windows, Internet Explorer, and temporary admin rights are the work environment almost everyone has to use. That's why I think a web-based solution is the only viable option.
1) A simple network automation platform that works for "my" custom environment, simply and effortlessly, and that doesn't break any existing network. (I don't mean something like HP Network Automation.)
2) Network diagram software - Seriously, any experienced network engineer will agree that this one needs a lot of disruption. Visio is very expensive and even then it is a pain to use. And Lucidchart or Cacoo or draw.io or other online tools too have their flaws/drawbacks.
3) Network monitoring tools - It is a pity that CA Spectrum, which is an ugly and non-user-friendly tool in my opinion, is among the most used network monitoring software. Network monitoring tools are the bread and butter of NOC (Network Operations Center) teams.
4) Network device configuration management and change/topology visualization tools - NetBrain seemed promising at the start, but it tries to do too many things and still has room for improvement.
It is high time more programmers started building and contributing to the network engineering field. There are numerous tools for each and every function, but there is a lot of room to make those tools more elegant, easier to use, and more reliable.
Yes, there is Software Defined Networking (SDN), where the vendors (Cisco, Silver Peak, Riverbed, etc.) themselves provide a nice visual dashboard. But the current "non-SDN" devices are going to stay for quite some time. And why should we depend on one vendor, and hence on the vendor-provided dashboard? There will always be customers who want a vendor-agnostic architecture and common tools to manage the infrastructure.
Note: A lot of the current tools (especially the ones I have mentioned above) do work very well and are used by large enterprises for a reason. But Tesla disrupted the car market in its own way when reliable Toyotas and fast Ferraris already existed.
http://blog.ipspace.net/2018/03/presentation-and-video-real-...
I have been trying to find time to do exactly this. I have a plan laid out, but as the popular quote goes: 'ideas are useless in and of themselves'.
Will post it here, should I ever realize my idea.
Possibly something that mixes business and process models into it, but again, something simple where you attach a single BPMN drawing and maybe an architectural sketch to the process. Add time management, deadlines, and maybe a tie-in to the web services of an ESDH system, and it might even work for task management in casework.
Everything is built for theoretical approaches. Like, we do SCRUM, but really we're doing scrum-ish things. We have an odd schedule; we work on multiple projects at once, depending on what resources are available and what has higher priority; sometimes something breaks and then we're all doing operations rather than development; sometimes the mayor has a direct request; and so on. I think we've tried all the tools from Atlassian to Trello and nothing fits - it's all too textbook for a messy place like ours. Often I think we should go back to post-its and a fucking Excel schedule, but I really don't want to ever print an Excel sheet again.
Interestingly, I do a lot of networking with other managers in the public sector, and everyone has this problem, not just in digitization. There isn't a single efficient tool for managing your workforce in the public sector.
There are excellent tools, don’t get me wrong, but we can’t have our workers spend hours on them because we can’t sell those hours to anyone.
We've gotten a lot of feedback that often times the strict structures of scrum and kanban are overly burdensome, yet teams still want and need some basic guardrails (as well as the ability to modify their processes on the fly). Our Product team is still testing and iterating on this new project type quite a bit, and if you're up for it, we'd love to give you an early demo and get your honest thoughts and feedback.
If you're interested, please shoot me an email and we'll find some time for a demo: jake@atlassian.com
Jake
Jira PMM @ Atlassian
Originally we used JIRA, but it was complete overkill. We've settled on making ad-hoc GitHub Project kanbans with very creative use of the labels, and so far it's been OK, but not perfect.
The other thing we have to do is time tracking for specific tasks depending on the project/client, which has us going over to ConnectWise (the primary part of our company is an MSP), which is just terrible.
The final problem is tying all of the documentation together. The MSP side of the company uses ITGlue, but it's not enough for everything we do as developers. Confluence was actually nice for that, but since we've left Atlassian we're just tracking stuff using a doc folder attached to the project source itself.
I've never used cushion, but I like clubhouse a lot.
* We pay $200/mo for a basic website with a forum, some billing things, some file storage, and other stuff I never use. It looks like it was written 20 years ago. (If someone can recommend something already out there, that would be helpful.)
* The doorman accepts dozens of package deliveries each day; each one triggers an email and is tracked in the above system when picked up. He needs to write the apt number on each box, and this needs its own tracking system
* I have to approve lots of expenses not knowing what fixing the hvac unit should cost
* We're getting screwed by the insurance company - I have no idea if our policy is good or not
* Insurance claims for damage are a huge s* show
* Energy management is horrible; we don't know where our electricity is going or how we can cut down
* Contractors are unreliable - I want to know who is blacklisted from neighbouring buildings because they suck
* How do our expenses compare to others? I have no idea.
Forum
Document storage
Maintains email lists
Tracks who's paid
+ more I don't really use
There are a bunch here
As you can see, the bulk of the problems in the health care industry come down to understanding how to navigate the huge mess of the US healthcare system. A longer-term solution is to build a single-payer system and incentivize patient care over patient procedures, but I doubt that will happen.
Could a PMR system help with some of these issues by giving patients access to their records, making it easier to aggregate data while abiding by HIPAA?
I recently saw an episode of Dirty Money on Valeant. Is Valeant a good representation of most pharma companies, and a reflection of most for-profit medical businesses whose primary obligation is to increase shareholder value?
What would be the best way to start aggregating and comparing prices in the USA? If it isn't possible, why do prices vary so much?
Patient medical records won't help with pricing issues, but data transparency would help with automated solutions to manage care.
There is no easy way to aggregate and compare prices. If you could get billions of health care claims, you could build algorithms to estimate pricing, or perhaps you could convince the large health care companies to share their pricing with you (highly unlikely). Prices vary widely because each institution negotiates its own set of prices with a specific set of providers. Basically, imagine your health care plan is a set of discount codes for a set of doctors and providers - except those discount codes are never shared with you, vary for each doctor and each provider, and can change at any time.
I do think Zocdoc is a great idea, because anything that makes scheduling an appointment easier is a win. I'm just saying that patient reviews give people a false sense of security that their doctor is good. In reality, almost no one knows if someone is good or not unless you have directly worked with that doctor and have a lot of stats on their outcomes (which anyway only works for specialties that have a lot of procedures). Assessing doctor quality is a very hard problem.
Patient reviews are good for assessing whether your personalities will mesh, but beyond that they don't go far.
I am not really too familiar with Oscar; what are they doing?
We actually explored this for a startup idea. The problem is that it is difficult to find a group of people who can do the evaluations in a truthful and holistic way:
* Hospitals will never want to give out outcome data because outcome data will be used against them for ratings by people who don't understand it (for example, some community hospital in Montana may be rated higher than Mass General because of case complexity issues). Or worse, it will be used by people who DO understand it :D
* We explored having doctors rate other doctors in a variety of ways (which I think would reflect the "truest" measure of quality). Residents and fellows could rate attendings, but they might not know how attendings in their hospital compare to attendings in most other hospitals. Additionally, attendings or hospitals might apply pressure to these groups to provide good ratings. Specialists could rate other specialists in their field, but then you might see collusion, false negative reviews, or retaliation. How you would avoid these problems is not immediately clear to me.
* As you point out, patients are really only able to evaluate bedside manner and not quality of care.
One way we thought about it was a rating system with public profiles for physicians and anonymous reviews from other physicians and members of the care team. Ratings from physicians in the same specialty and from providers at the physician's own institution would be weighted more heavily than other ratings, and the ratings given by the highest-rated physicians would carry more weight within their specialty than those given by an average-rated physician. You could bootstrap the system by asking specialists to name the top X people in their field; these people would automatically be rated highly.
Patients could log in and provide comments about patient experience; hospitals could log in and provide outcome data if they wanted.
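The weighting scheme above can be sketched as a simple weighted average. All weights and role names here are invented for illustration, not calibrated:

```python
def weighted_rating(reviews):
    """Combine reviews, weighting each by who left it.
    Weights are illustrative placeholders, not calibrated values."""
    WEIGHTS = {
        "same_specialty_physician": 3.0,   # truest signal of quality
        "same_institution_provider": 2.0,  # sees the physician's work daily
        "other_physician": 1.0,
        "patient": 0.5,                    # mostly a bedside-manner signal
    }
    total = sum(WEIGHTS[r["role"]] * r["score"] for r in reviews)
    norm = sum(WEIGHTS[r["role"]] for r in reviews)
    return total / norm if norm else None
```

A production system would also need the second-order weighting mentioned above (highly-rated raters counting for more), which turns this into an iterative, PageRank-style computation rather than a single pass.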
I am not entirely sure how you would monetize it. It's the equivalent of the dating-app problem: the better you are at matching, the less money you make, because users exit the platform. I do agree it would be great if someone could solve this problem, though.
> Helping doctors and patients estimate costs - Neither doctors nor patients understand costs. It is very routine for a doctor to suggest getting a lab test from X place because they have experience working with that center. They have no idea that for your specific insurance plan it will cost you 2x another place, and so you end up with an unhappy patient who blames their doctor for ripping them off. There should be tools to help patients and doctors estimate costs.
We had a startup that came at this in an indirect way. We were trying to make it easier for labs and other providers to perform eligibility checks and facilitate prior auths in real time. Our proposed solution would have involved running the check during ordering, and then having the phlebotomist or lab tech at the hospital doing the sample collection contact the doctor and inform them that something would or would not be covered by insurance. The provider could then talk to the patient about out of pocket costs etc. I think this actually could have worked; our team imploded before we could prototype it however :(
While I like the other ideas, glhf trying to get people to practice lifestyle management or be adherent to a treatment regimen XD
As an example, look upon the bloodied corpse of the failed startup Remedy, which was actually doing something really good -- helping users find billing errors and getting money back for them on bad charges. But incredible amounts of pushback stymied them:
https://www.fastcompany.com/40483774/remedy-wanted-to-cut-pe...
However, I have looked into incentivized, distributed platforms such as wings.ai or golem.network, and I believe this approach could be applied to healthcare. If patients were incentivized to get a copy of the CPT/DGX codes on their bill from the doctor, along with survey information around PQRS, you would have a lot of useful data. This data could be used not only to provide a rating system for doctors, but also price comparisons on CPT/RVU across the United States and the ability to provide PQRS data for doctors. It's an all-in-one, decentralized healthcare platform for patients, doctors, and insurance providers.
We directly interface with those interested in the results of the check, and there is an overwhelming amount of work in building integrations with schools, applicant tracking systems, hospitals, public records, courts...
We spend most of our time building XML and JSON parsers to cram their data into our models.
If there was a company that provided a single interface to this data, you could write your own ticket. I know we aren't the only company in this space with this issue.
I used to be a management consultant. We often built financial models of company operations or parts of their value chain, and then looked at the change from process improvement, restructuring or bolting on new business lines. Everything was done in Excel. For the annual strategic planning and budgeting cycle, large companies used expensive proprietary systems to aggregate divisional financial plans.
I now work for a big bank, building out a Jupyter-based data science and machine learning platform. We have hooks in to SDLC with code reviews, commit history, and all the good stuff that software engineers nowadays take for granted.
So what if Finance departments dropped Excel and instead used our dev tools and methodologies? I'm genuinely curious if any companies are doing this, or if any startups are building such solutions.
Many spreadsheets are used as disposable report tools to support management level business decision making. While there are exceptions, in general perhaps they are more like one off report-generation shell scripts than unit operations in a larger business process. This distinction is significant, because rigor adds more value on automating processes than one-off reports, owing to increased lifecycle complexity.
At the management level, time is gold. These are people who have enough money, lots of responsibilities, and no time. They already have a tool that works. You would be essentially asking them to waste their most valuable resource investing in a new tool that may disappear tomorrow without a strong/clear ROI.
I don't doubt you could get some customers for such a product, but I'm skeptical it's going to change the paradigm. Platform-for-everything businesses (Google, Oracle, Microsoft, etc.) tend to have a large minimum snowball size.
I see things developing differently: an open source financial gateway will become the standard accounting interface to many businesses as trade moves toward greater transparency, predictability, speed and automation, and we see features like arbitrary asset settlement, multi-hop transactions, banking automation and multicurrency accounting becoming standard. Accounting departments will begin to thin out as forms on such a system become input to generate figures and reports previously generated manually. It will probably be hosted. We see a little of this now with cloud accounting systems, but I'd wager it will go a lot further with Germany's Industry 4.0 vision and a similar result in China. Supply chains will be the driver, there's just so much fat to trim.
Maybe it's just a feature, not a full product, but it makes any "learn to code" MOOC unusable.
Jest and Mocha both have a watch feature that re-tests a file each time it is saved, for "continuous testing", much like using Gulp watch or any kind of live dev server environment.
Jest also offers snapshot testing, which captures the rendered output of a UI and alerts you in tests whenever that output changes. I could see this being used in bootcamps as well.
You install a CLI program, write the code, and type something like `learn test`. The tests are run and the results shown; if all tests pass, the dashboard is updated to show them passing.
Edit: I am in the industrial space. Basically all large equipment purchases work via a [Quote > PO > Pro Forma Invoice > Final Invoice > Payment > Receipt] process.
What makes all of the existing offerings not usable?
Two major problems around proactively managing an organization's spend culture:
1) Entropy: the pain around the problem is not acute; it is a slow descent into disorder. The problem gets harder to solve as organizations grow.
2) Chosen solutions are not adopted by the team, and data around company spending is lost.
Solutions need to fit the current workflow or have a dedicated champion with the authority to ensure adoption.
Especially with the recent federal push to put electronic logging systems in every truck, this system is absolutely ripe for disruption. The downside is you'll be fighting entrenched companies like IBM for ground.
Can you give a few more specific examples? Are you in the industry now? Working at a carrier?
An example 204 EDI (Load Tender) looks like this:
ISA*01*0000000000*01*0000000000*ZZ*ABCDEFGHIJKLMNO*ZZ*123456789012345*101127*1719*U*00400*000003438*0*P*>
GS*SM*4405197800*999999999*20111219*1747*2100*X*004010
ST*204*0001
B2**XXXX**9999955559**PP
B2A*04
L11*NONPRIMARY*OK
MS3*XXXX*B**M
NTE**FROZEN GOODS SET TO -10d F
N1*PF*XYZ CORP*9*9995555500000
N3*31875 SOLON RD
N4*SOLON*OH*44139
N7**NONE*********FF****5300
S5*1*CL*27800*L*2444*CA*1016*E
L11*9999001947*DO
L11*9999670098*CR
L11*9999001866*DO
L11*9999669887*CR
G62*69*20111218
N1*SH*XYZ CORP*9*9991555550000
N2*TERMINAL FREEZER
N3*5555 TERMINAL RD
N4*CLEVELAND*OH*44023
S5*2*PU*3042*L*312*CA*146*E
L11*9999001866*DO
L11*9999595358*PO
L11*9999669887*CR
G62*70*20090728
N1*ST*1 EDI SOURCE*93*9990055555
N3*31875 SOLON RD
N4*SOLON*OH*44139
OID*9999669887*99999595358**PC*312*L*3042*E*146
L5**FREIGHT
G61*IC*FEEDBACK*EM*FEEDBACK@1edisource.com
S5*3*CU*24758*L*2132*CA*870*E
L11*9999001947*DO
L11*9999008881*PO
L11*9999670098*CR
G62*70*20111218
N1*ST*1 EDI SOURCE*93*9990055555
N3*55555 5TH AVE
N4*MAYFIELD*OH*44244
OID*9999670098*999608881**PC*2132*L*24758*E*870
L5**FREIGHT
G61*IC*FEEDBACK*EM*FEEDBACK@1edisource.com
L3*27800*G*******1016*E*2444*L
SE*46*0001
GE*1*2100
IEA*1*000002104
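Mechanically, tokenizing X12 is the easy part: each segment is an ID followed by `*`-separated elements. A minimal sketch of pulling the stop addresses out of a sample like the one above (this hardcodes `*` and one segment per line; a real parser reads the element separator and segment terminator from the ISA envelope, and the semantics of each segment still require the paid spec):

```python
def parse_segments(edi_text):
    """Split a line-per-segment X12 sample into (segment_id, elements) pairs."""
    segments = []
    for line in edi_text.strip().splitlines():
        parts = line.strip().split("*")
        segments.append((parts[0], parts[1:]))
    return segments

def stops(segments):
    """Each N4 segment carries city/state/zip for a stop address."""
    return [(els[0], els[1]) for seg_id, els in segments if seg_id == "N4"]

sample = """N1*SH*XYZ CORP*9*9991555550000
N3*5555 TERMINAL RD
N4*CLEVELAND*OH*44023
N1*ST*1 EDI SOURCE*93*9990055555
N4*MAYFIELD*OH*44244"""

print(stops(parse_segments(sample)))
# → [('CLEVELAND', 'OH'), ('MAYFIELD', 'OH')]
```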
I haven't properly parsed it, but I believe that's going from Cleveland to Mayfield. One of those L11 segments is probably a reference number. There's no MS1 segment, so it's likely over the road? Anyway, it's not exactly descriptive or even human readable... A reply accepting a load looks like this:
ISA*01*0000000000*01*0000000000*ZZ*ABCDEFGHIJKLMNO*ZZ*123456789012345*101127*1719*U*00400*000003438*0*P*>
GS*GF*4405197800*999999999*20111219*1742*000000003*X*004010
ST*990*000000003
B1*XXXX*9999919860*20111218*A
N9*CN*9999919860
SE*4*000000003
GE*1*000000003
IEA*1*000000003
These are commonly exchanged as text files over FTP sites. Some of our more forward-thinking, larger customers are considering moving to AS2, which I believe is sent over HTTP rather than FTP. A cursory Google search doesn't really turn up any clear examples of AS2, which doesn't exactly comfort me, but at least there's an RFC[0] for it, whereas for the X12 spec you have to pay[1] to see certain parts of it.
Not that anyone follows the "spec" anyway. We code special handling for every single one of our customer's EDI transmissions.
I wish everything was REST, or at least JSON. That would be 10x easier. Instead we spend weeks going back and forth on silly things like what a 07 means in the ATS segment, or what character to use for line endings (wish I was kidding -- we've been blocked for two months on the line ending character).
What's more, with the ELDs in all our trucks, customers increasingly want GPS updates. I'd love to offer them a streaming socket with GPS data -- it's completely feasible given our ELD backend. Instead everyone is wondering how we can send updates in 15-minute increments over FTP, especially when these transactions are often batched in 5-minute loops on both ends in the first place.
It kills me a little. We could be doing so much more. I can't believe we aren't pushing for real time. I can't believe five to fifteen minute batching loops are acceptable.
[0]: http://www.ietf.org/rfc/rfc4130.txt [1]: http://www.x12.org/x12-work-products/x12-edi-standards.cfm
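To make the streaming-versus-batching contrast concrete, here's a toy sketch. Everything here is illustrative (the function names, the fix format, the intervals), not anyone's actual backend:

```python
import json

def stream_updates(fixes, send):
    """Real-time: push each GPS fix the moment the ELD reports it."""
    for fix in fixes:
        send(json.dumps(fix))

def batch_updates(fixes, period_s=900):
    """Status quo: group fixes into 15-minute windows, like the FTP drops."""
    batches = {}
    for fix in fixes:
        window = fix["ts"] // period_s     # which 15-minute bucket
        batches.setdefault(window, []).append(fix)
    return [batches[w] for w in sorted(batches)]

fixes = [{"ts": 0,    "lat": 41.50, "lon": -81.70},
         {"ts": 300,  "lat": 41.48, "lon": -81.62},
         {"ts": 1000, "lat": 41.45, "lon": -81.55}]

sent = []
stream_updates(fixes, sent.append)
print(len(sent), "immediate updates vs", len(batch_updates(fixes)), "batch files")
# → 3 immediate updates vs 2 batch files
```

With streaming, the customer's latency is one network hop; with batching, it's the window size plus both sides' polling loops, which is where the five-to-fifteen-minute delays come from.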
I also feel game development needs a way for creators to commission help with various aspects of the process too. Oh sure, there's the odd forum where you can pay for graphics assets or music, but what if your problems are code related? Or game design based? It's a lot harder to request that sort of thing online, let alone find a way to pay for it. Where can I say, hire a level designer or game programmer independently of a studio?
As far as I can tell, nowhere, which makes it awfully hard when I'm stuck and just need a bit of help to finish a mostly complete project.
Anyone who solves that would get a lot of my money, I'll say that much.
Upwork is by far not optimal but you can easily find talent for quick questions there.
If you are logging into services that can identify you, checking your personal email over a TOR connection, or doing work over a TOR connection, you're putting yourself and your company at risk.
Secondly, with a VPN, I first have to connect to the open network before I can activate the VPN. I also need to do this for each device I want to connect (phone, computer, tablet).
https://www.amazon.com/Destroy-Tech-Startup-Easy-Steps/dp/09...
It's a $600B industry in decline because its traditional R&D engine is sputtering out, and big pharma has been amazingly acquisitive over the last five years to replace off-patent blockbuster drugs (more IPOs and big M&A than software over the last 5 years, despite getting 1/5 of the venture funding).
There's tons of really interesting new tech for startups to explore: synthetic biology, cell and gene therapy, bioelectronic medicine, and many, many others.
A mistake can mean misconfiguring your targeting (wasting money on ads that won't give you an ROI), misconfiguring your third-party tracking (letting data like conversions go unaccounted for, or not having your auditing tags set up, meaning you show ads to fraudulent users you otherwise wouldn't have to pay for), etc.
When installing these at scale in existing buildings you have to be able to send out local workforce to properly install and activate thousands of sensors, as well as maintain them afterwards, without prior training. It’s one of those things that sounds easy on the surface but is riddled with complexity, like how to register which sensor is installed where in a foolproof way, or how to easily locate faulty sensors for replacement.
There’s plenty of competition in people selling the sensors, providing connectivity, or doing data analysis, but I’m not aware of any solutions for installing the damn things which aren’t tied to a vendor.
The Problem
There are currently three main ways we discover an ever-growing amount of content on the web: news, social networks, and search. There is a fourth category that is missing: relevance—a break from the noise on the Internet to discover what's relevant to us.
News delivers what’s happening in the world right now. Social networks let us know what’s happening with our friends. Search is great at finding the needle in the haystack. But how do we discover things from around the web that are new and relevant to us?
Incentives on existing platforms are such that new and entertaining content wins. We need a better system that can filter the signal out of the noise.
The Solution
We’re building Preadr to tackle the relevance problem and bring forward quality content: a platform that helps you discover the most relevant content based on your interests, for both leisure and learning.
Every day we analyze an ever-growing number of new links and create a storyline of the most relevant ones. We curate content from the most trusted sources on the internet and let our algorithms filter the relevant from the non-relevant. Since quality is not limited by format, we offer a mix of formats: articles, videos, podcasts, etc.
Small construction companies are still in the stone age. PlanGrid and Submittal Exchange exist, but not much else is popular.
Textura is owned by Oracle, and everything else is owned by Trimble and Autodesk.
There's a plethora of attempts at field document management and timecards, but zero great ERPs for medium-to-large businesses. Procore is like half an ERP, without an accounting system.
There is a huge untapped thirst for something that "just works" for labor productivity tracking and document management.
Construction is one of the places where I think an enterprise blockchain could actually apply better than a traditional database. Imagine a construction project with one blockchain, with every general contractor, sub, and vendor participating: shares, payments, to-dos, Gantt charts, drawings, the model itself. They could all access the database from whatever supported client their firm uses (think email clients all interoperating with each other) while on the backend working against one shared distributed database. I think you could turn down the bad-actor security a bit, similar to https://azure.microsoft.com/en-us/blog/announcing-microsoft-...
* A project-based document management system that has baked-in version control.
* Issue and Task trackers.
* Soft realtime features, such as notifications when models are converted or when anyone comments on an issue or task you've logged, plus a realtime chat system.
* A browser-based model viewer with the ability to:
* Federate multiple models from various project disciplines into one scene.
* Take screenshots of the scene, mark them up and log issues and tasks on model assets with the marked up screenshots straight away.
* Associate documents with model objects.
* Hold conversations on model objects.
* Store / review feeds from the built counterparts of modelled assets.
Video tutorials: https://www.youtube.com/channel/UC8xrkI2ZaSm-5s_aJnnGpeA/vid...
API documentation: https://app.rebim.co/static/docs/index.html
Intro for small to medium sized design studios: https://rebim.co
Intro for enterprise customers: http://rebimenterprise.com/
We've only just started beta testing this February but you're welcome to sign up for an account at https://app.rebim.co
I can be reached at lukebrooks [at] azurelope [dot] com if you need any assistance. We would also love to hear your thoughts on REBIM and any suggestions for improvements!
Our process and system make it super efficient to record video and send notifications to team members to edit, add captions, strip audio for your podcast, set up your podcast, set up your Alexa flash briefing, etc.
But it takes hours to do all of this if you're on your own, and that's ONLY if you know how to do it all. Content is a black box most people have no idea how to work with. If you don't pay our agency to do it for you, you are kind of out of luck.
We sell a book on our process now and sell about 50 copies a week. These people are validated and want to learn how to do it, and are willing to pay to learn.
It only makes sense to build the platform that automates this process for these people and offer it to them. They've already paid to learn. Might as well offer the platform to do it.
From a business side of things:
- loyalty programs/rewards
- pricing optimization
- financial services for underserved customers, including entrepreneurs, families, millennials, freelancers, etc.
A high-level programming language for programming robots with safety in mind would be amazing. RoboDK is the only one doing this, and they still suck.
What I want is a PLC:
1) A PLC which has a hard real-time process and soft real-time processes. Beckhoff and B&R do this; other PLCs do as well.
2) I want to do the hard real-time programming in Rust, Ada, or "safe/restricted C"... Rust and Ada are so much more expressive than structured text.
3) I would also like a simple API allowing deterministic, hard real-time communication between the real-time control domain and a soft real-time domain, so that higher-level languages can be used for control problems. And I would like the PLC to support one of the high-level languages: CLISP or Racket, F#, Julia, Python, Elixir... I don't care which.
This is actually doable on Beckhoff, but the tight connection between hard and soft real-time was not really there... and Beckhoff runs Windows. I would prefer Linux, VxWorks, or QNX.
Of course, installations are usually expected to function for ten to twenty years at a minimum...an order of magnitude greater than anything your typical js-framework-of-the-day considers.
Do you think it is interesting somehow?
Our marketing department sends 10+ campaigns/week, each goes through multiple changes and compliance approval. It’s very time consuming, especially when someone needs to touch the code.
> especially when someone needs to touch the code
Do you not generally touch the code? Is some kind of WYSIWYG editor used to create an email or is the code human written?
Edit: I'm in the computing industry. Have you heard of it?
All solutions out there are horrible....
However, that SaaS-led model is changing fast and we’d love to hear about your pain with single-tenant, on-prem solutions available today. Would you be open to a no-strings-attached chat? mashery-pm<at>tibco<dot>com
I'm from Venezuela; it is not a developing country right now, but it may be one in the future.