February 2020

Launch HN: Probably Genetic (YC W19): At-home DNA testing for every gene
14 by astroH | 2 comments on Hacker News.
Hi Hacker News, We’re Harley and Lukas from Probably Genetic ( https://ift.tt/2qqG7t6 ). We built an at-home, physician-ordered DNA test that covers all genes and looks for pathogenic variants related to thousands of rare genetic conditions.

Why rare genetic conditions? It may seem like a niche problem, but there are ~400 million people worldwide with a rare genetic condition, half of whom are currently undiagnosed. To put this into perspective, it’s estimated that 1 in 10 Americans has a rare condition, and while each of the ~10,000 individual conditions is rare, the total population affected is larger than the populations for cancer and HIV combined. Furthermore, for the patients who have been diagnosed, it takes doctors an average of 8 years to identify their conditions. You’ve already heard of rare genetic conditions; you might just not be aware of it. Remember the Ice Bucket Challenge for ALS? Most of these diseases initially look like more common conditions, such as autism, chronic pain, ADHD, or even the flu, before patients get worse. This diagnostic odyssey can be extremely costly for patients, and in our experience some spend more than $30,000 and see more than 10 doctors before they get access to the right specialists and testing.

We have seen this problem first-hand. Lukas is a rare-disease expert and worked on the world’s largest rare disease project (the 100,000 Genomes Project) as a PhD candidate at Oxford University in the UK. I am trained as a theoretical computational astrophysicist, and during my PhD and fellowship at Cambridge University and Oxford University I spent my spare time working with National Health Service doctors, developing and publishing medical diagnostics with machine learning.

Our original idea was actually slightly different from what we have now. We spent a lot of effort developing a symptom checker specifically for rare conditions, with the idea of combing through existing medical records and flagging patients with potential rare genetic conditions because, unsurprisingly, WebMD and others aren't really great for this purpose. As we were building this, we realized that for the patients we worked with, even if their symptoms were suggestive of a genetic condition, access to clinical-grade genetic testing was extremely difficult for many: it was either too expensive because insurance wouldn’t cover the cost, or they couldn’t find a doctor who would order it. Thus, we decided to use our expertise to both find these patients more efficiently and build a service that drastically reduces the time and cost of accessing clinical-grade genetic testing.

About the test: just like most DNA tests, you can do this from home and it’s noninvasive; all we need is a little saliva. Unlike most DNA tests, ours is physician-ordered and sequenced in a CLIA-certified, CAP-accredited lab; the results are signed out by a licensed clinical lab director; all users have access to genetic counseling; and we try to incorporate as much phenotypic data as possible into the analysis. Our product is a whole-exome sequencing test with 100x coverage, covering all of the more than 20,000 genes, where ~85% of known disease-causing variants occur.

People always ask: how are you different from 23andMe? Looking for a rare genetic condition is kind of like trying to find a typo in a novel. Using a 23andMe (or similar) test to look for such a condition is like trying to find a typo in the first Harry Potter novel and stopping after 75 words. Those tests are just not meant for this purpose. Most are based on genotyping arrays that look for very specific variants at predetermined locations in the genome. However, the variants that cause rare diseases can occur anywhere. For example, there are over 1,700 different mutations in the CFTR gene that can cause cystic fibrosis. Approximately 85% of known pathogenic mutations occur in the protein-coding regions of DNA, called the exome. Our test is a whole-exome sequencing test rather than a genotyping array, which allows us to cover all of the genes in a person’s DNA.

We often get the question: why not do whole genome rather than whole exome? Right now it simply comes down to accessibility. For most consumers, whole-genome sequencing is still too expensive, and the additional gain in coverage of pathogenic mutations doesn’t necessarily warrant the significant price increase. That being said, if you are interested in clinical-grade whole-genome sequencing, we will soon be able to offer that as well.

Patient privacy is a huge concern for us and something we think about all the time. Quite simply, we will never share any of our users' data without explicit consent, and we are more than happy to both delete our users' data and destroy their sample if requested. Interestingly, we often have the opposite problem, where we receive inbound requests from people trying to share their data with us to see if we can help them. Out of the more than 10,000 rare diseases, over 95% do not have an FDA-approved treatment, which is why the rare disease community is so motivated to leverage their personal insights. We have started a waitlist to provide such services and are actively seeking ways to help these people and integrate them into our community, even if they have not had sequencing through us.

Finally, how much does it cost? A single test currently costs $899 on our website, but we try to offer the test in trios, where we sequence the patient and two family members, as this often gives a higher diagnostic yield. The latter option is $1,799. We expect the price to decrease significantly over time as the cost of sequencing drops and more of the analysis can be automated. We don’t currently accept insurance; however, in our experience, using more traditional channels to access this kind of testing can result in bills of >$10,000, not all of which may be covered. Many insurance providers don’t even cover this kind of testing except for very specific purposes, despite more and more of the medical literature recommending exome sequencing as a first-tier diagnostic for specific indications. Ideally, we would make the product so affordable that it simply makes sense to use us rather than billing insurance for the test plus the numerous doctor and specialist visits needed before and after it’s ordered. We are currently offering the test at cost, as we aim not to profit off of the patients who need it most, and this is sustainable because, with the consent of the patients, we can leverage our data asset for drug discovery, clinical trial recruitment, and drug repurposing.

Consumer genetic testing is growing rapidly as an industry, nearly doubling every year. What is missing from this market is accessible physician-ordered testing that can genuinely help those with complex symptoms and undiagnosed genetic conditions. This is what we hope to provide. If you have any questions or feedback, we would love to hear it, and please check us out at https://ift.tt/2qqG7t6 .
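To make the array-versus-exome point above concrete, here is a deliberately toy sketch (not Probably Genetic's pipeline; the positions and the "known pathogenic" list are invented for illustration) of why a fixed probe list can miss variants that sequencing the whole exome would report:

```typescript
// Toy illustration: a genotyping array checks a fixed list of positions,
// while exome sequencing reports every coding variant it observes.

interface Variant {
  gene: string;      // e.g. "CFTR"
  position: number;  // position within the gene (simplified)
  alt: string;       // observed change
}

// Hypothetical probe list interrogated by an array.
const arrayProbes = new Set(["CFTR:1521", "BRCA1:68"]);

// Hypothetical variants reported by sequencing a patient's exome.
const sequencedVariants: Variant[] = [
  { gene: "CFTR", position: 1521, alt: "del" }, // a commonly probed site
  { gene: "CFTR", position: 3909, alt: "C" },   // a rarer site, not on the array
];

// Hypothetical list of known pathogenic sites (in reality, curated databases).
const knownPathogenic = new Set(["CFTR:1521", "CFTR:3909"]);

function flagPathogenic(variants: Variant[], restrictToArray: boolean): string[] {
  return variants
    .map((v) => `${v.gene}:${v.position}`)
    .filter((key) => (!restrictToArray || arrayProbes.has(key)) && knownPathogenic.has(key));
}

console.log(flagPathogenic(sequencedVariants, true));  // ["CFTR:1521"] - the array view misses the second variant
console.log(flagPathogenic(sequencedVariants, false)); // ["CFTR:1521", "CFTR:3909"]
```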

Show HN: Twitter lists for YC 2017, YC 2018, YC 2019 companies
2 by gajus | 0 comments on Hacker News.
These lists include all participating founders. They're a great way to see what they are all up to now:

* YC S2017 https://twitter.com/i/lists/1230735563034021893
* YC W2017 https://twitter.com/i/lists/1230735739861618691
* YC S2018 https://twitter.com/i/lists/1230734925881470976
* YC W2018 https://twitter.com/i/lists/1230735246502424577
* YC S2019 https://twitter.com/i/lists/1230728767556833280
* YC W2019 https://twitter.com/i/lists/1230734559743922178

Also:

* YC S2020 https://twitter.com/i/lists/1230850014617862145 (will be populated in the future)
* YC W2020 https://twitter.com/i/lists/1230849940101844999 (will be populated in the future)

Show HN: Runnaroo – A new web search engine
3 by chris_f | 1 comment on Hacker News.
In the spirit of releasing to the public early, I would like to share a search engine/portal I have been building in my spare time: https://ift.tt/2T4emlT

Why: I found I could no longer easily tell the difference between ads and organic search results in other search engines (even on DuckDuckGo), and I have been disappointed with recent UI changes in a lot of the major search engines. My main guiding rule when adding features has been to ask, "is it better for the user?"

Important to know: I still believe Google currently returns the most relevant web results, so like StartPage I use Google's index as the base of the web results.

Some interesting features (a rough sketch of the routing idea follows below):

Deep Searching - The inclusion of relevant results from different vertical-specific search engines depending on the search query. For example, if a user searches for 'python jobs NY', results from Indeed.com will be pulled in. The long-term plan is to connect to the best vertical information sources for each type of query.

Quick Directs - Automatic redirects for a limited number of commonly searched terms (facebook, google, etc.). This is similar to the Google "I'm Feeling Lucky" button, but done by default. I found that most of the searches I did in Google were navigational. It's simple, but surprisingly it has been the thing that resonates with normal (non-technical) users.

Strict Search - An optional setting to force all the search terms in the query to be present in the web results.

Full URL paths - The full URL path of the result is available on the SERP under a toggle.

Privacy - The site doesn't do any tracking. There will most likely be a need to put some things in place at some point to prevent abuse, but user search data will never be used for ad tracking.

The number of things I still have to add to the site is incredibly long, but I have gotten to the point where I have switched over to using it full time as my default search engine.
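Purely as an illustration of the routing described above (this is not Runnaroo's code; the trigger patterns and vertical sources are hypothetical), the general shape might look like:

```typescript
// Sketch of "Quick Directs" plus vertical "Deep Search" routing.

const quickDirects: Record<string, string> = {
  facebook: "https://www.facebook.com",
  google: "https://www.google.com",
};

// Hypothetical vertical triggers: if a query looks like a job search,
// also pull results from a jobs-specific source.
const verticals = [
  { name: "jobs", pattern: /\bjobs?\b/i, source: "indeed.com" },
  { name: "code", pattern: /\b(github|stack overflow)\b/i, source: "stackoverflow.com" },
];

function routeQuery(query: string): { redirect?: string; deepSources: string[] } {
  const normalized = query.trim().toLowerCase();
  if (quickDirects[normalized]) {
    // Navigational query: redirect straight to the destination.
    return { redirect: quickDirects[normalized], deepSources: [] };
  }
  // Otherwise, decide which vertical sources to blend into the base web results.
  const deepSources = verticals.filter((v) => v.pattern.test(query)).map((v) => v.source);
  return { deepSources };
}

console.log(routeQuery("facebook"));       // { redirect: "https://www.facebook.com", deepSources: [] }
console.log(routeQuery("python jobs NY")); // { deepSources: ["indeed.com"] }
```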

Launch HN: Syndetic (YC W20) – Software for explaining datasets
3 by stevepike | 0 comments on Hacker News.
Hi HN, We're Allison and Steve of Syndetic ( https://ift.tt/32nmmRL ). Syndetic is a web app that data providers use to explain their datasets to their customers. Think ReadMe, but for datasets instead of APIs.

Every exchange of data ultimately comes down to a person at one company explaining their data to a person at another. Data buyers need to understand what's in the dataset (what are the fields and what do they mean) as well as how valuable it can be to them (how complete is it? how relevant?). Data providers solve this problem today with a "data dictionary", which is a meta spreadsheet explaining a dataset. This gets shared alongside some sample data over email. These artifacts are constantly getting stale as the underlying data changes.

Syndetic replaces this with software connected directly to the data that's being exchanged. We scan the data and automatically summarize it through statistics (e.g., cardinality), coverage rates, frequency counts, and sample sets. We do this continuously to monitor data quality over time. If a field gets removed from the file or goes from 1% null to 20% null, we automatically alert the provider so they can take a look. For an example of what we produce, but on an open dataset, check out the results of the NYC 2015 Tree Census at https://ift.tt/38UuiMN... .

We met at SevenFifty, a tech startup connecting the three tiers of the beverage alcohol trade in the United States. SevenFifty integrates with the backend systems of 1,000+ beverage wholesalers to produce a complete dataset of what a restaurant can buy wholesale, at what price, in any zip code in America. While the core business is a marketplace between buyers and sellers of alcohol, we built a side product providing data feeds back to beverage wholesalers about their own data. Syndetic grew out of the problems we experienced doing that. Allison kept a spreadsheet in Dropbox of our data schema, which was very difficult to maintain, especially across a distributed team of data engineers and account managers. We pulled sample sets ad hoc and ran stats over the samples to make sure the quality was good. We spent hours on the phone with our customers putting it all together to convey the meaning and the value of our data. We wondered why there was no software out there specifically built for data-as-a-service.

We also have backgrounds in quantitative finance (D. E. Shaw, Tower Research, BlackRock), large purchasers of external data, where we've seen the other side of this problem. Data purchasers spend a lot of time up-front evaluating the quality of a dataset, but they often don’t monitor how the quality changes over time. They also have a hard time assessing the intersection of external datasets with data they already have. We're focusing on data providers first but expect to expand to purchasers down the road.

Our tech stack is one monolithic repo split into the frontend web app and backend data scanning. The frontend is a Rails app and the data scanning is written in Rust (we forked the amazing library xsv). One quirk is that we want to run the scanning in the same region as our customers' data to keep bandwidth costs and transfer time down, so we're actually running across both GCP and AWS.

If you're interested in this field you might enjoy reading the paper "Datasheets for Datasets" ( https://ift.tt/3c494hD ), which proposes a standardized method for documenting datasets modeled after the spec sheets that come with electronics.
The authors propose that “for dataset creators, the primary objective is to encourage careful reflection on the process of creating, distributing, and maintaining a dataset, including any underlying assumptions, potential risks or harms, and implications of use.” We agree with them that as more and more data is sold, the chance of misunderstanding what’s in the data increases. We think we can help here by building qualitative questions into Syndetic alongside automation. We have lots of ideas of where we could go with this, like fancier type detection (e.g. is this a phone number), validations, visualizations, anomaly detection, stability scores, configurable sampling, and benchmarking. We'd love feedback and to hear about your challenges working with datasets!
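As a rough illustration of the per-column scanning described above, here is a minimal sketch (not Syndetic's Rust implementation; the field names and alert threshold are arbitrary) of profiling columns and flagging a null-rate jump between two scans:

```typescript
// Profile a column: null rate, cardinality, and top frequency counts.

type Row = Record<string, string | null>;

interface ColumnProfile {
  column: string;
  nullRate: number;
  cardinality: number;
  topValues: [string, number][];
}

function profile(rows: Row[], column: string): ColumnProfile {
  const counts = new Map<string, number>();
  let nulls = 0;
  for (const row of rows) {
    const value = row[column];
    if (value === null || value === undefined || value === "") {
      nulls++;
    } else {
      counts.set(value, (counts.get(value) ?? 0) + 1);
    }
  }
  const topValues = [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 5);
  return {
    column,
    nullRate: rows.length ? nulls / rows.length : 0,
    cardinality: counts.size,
    topValues,
  };
}

// Alert if a column's null rate degrades between two scans (e.g. 1% -> 20%).
function nullRateAlert(previous: ColumnProfile, current: ColumnProfile, threshold = 0.05): boolean {
  return current.nullRate - previous.nullRate > threshold;
}

const scan1 = profile([{ species: "oak" }, { species: "maple" }], "species");
const scan2 = profile([{ species: "oak" }, { species: null }], "species");
console.log(nullRateAlert(scan1, scan2)); // true: null rate jumped from 0% to 50%
```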

Show HN: Memoly, a subscription manager app built without code
3 by shaoy | 0 comments on Hacker News.
Hey HN community, I'm Sebastian, a Product Manager/Designer/Maker. I'd like anyone with an idea to be able to build and validate something quickly. I got into the #nocode thing earlier last year and wanted to see what's possible with the tools out there today. Over the last few months, I spent a lot of late nights and weekends on my side project, and I'm very proud to present the result today. Meet Memoly (https://memoly.app). It's built entirely without code. To achieve my goal, I used this tech stack: Adalo, Zapier, Airtable, and Carrd. Memoly is available for iOS and Android. I came up with the idea to solve my own pain point: I have way too many subscriptions! When I forgot to cancel a yearly plan on time, that renewal was costly. I wanted to create something that helps me keep track of my spending. Would love to hear your thoughts! Cheers, Sebastian

Show HN: Free startup ideas, analysis, validation and first steps
2 by brianthomas | 0 comments on Hacker News.
I started a newsletter sending out daily startup ideas. After a year, I transitioned to weekly ideas with more in-depth details. I recently posted all of them to the website and made them searchable. Check it out and let me know what you think. By providing more than just an idea, my hope is to help someone start their next business. https://ift.tt/2sLlCEU


Continue reading Prince Charles visits Aston Martin with his Aston, helps build a DBX, draws tabloid ire

Prince Charles visits Aston Martin with his Aston, helps build a DBX, draws tabloid ire originally appeared on Autoblog on Fri, 21 Feb 2020 13:20:00 EST.


from Autoblog Celebrities https://ift.tt/38PICGm
via IFTTT

Launch HN: Freshpaint (YC S19) – an automated, retroactive Segment alternative
2 by malisper | 0 comments on Hacker News.
Hello HN! We’re Fitz & Malis, the founders of Freshpaint (YC S19) (https://ift.tt/2ZVYN18). Our product is a more flexible way of setting up your analytics and marketing tools. With our javascript snippet, Freshpaint automatically instruments your site by tracking every behavior for you, up front. From there, you can create events for behaviors like clicks, pageviews, etc. either through a point-and-click interface or code (whichever you’re more comfortable with). In one click, Freshpaint sends data collected for that event – past or present – to 80+ analytics or marketing tools.

What does retroactive mean? Install Freshpaint’s snippet today. In 6 months, start tracking something new, and you'll have the last 6 months’ worth of data that our product has already collected. We make it easy to backfill that historical data into your tools.

There are two types of people who get the most value out of Freshpaint: 1. The developer who owns data infrastructure at their company and wants to lighten the load through automation. 2. The non-technical marketer/customer success/PM (or founder!) who makes use of the tools that require customer data.

We met while working at Heap (YC W13) – Malis led the database team and Fitz led product marketing. When starting Freshpaint, we were inspired by a phenomenon we saw while working with customers at Heap. Even though they used Heap for analytics, we kept seeing companies also writing tracking code for each behavior they wanted to use in other tools, either with a routing service like Segment or mParticle, or by building direct implementations and their own pipelines. Across analytics, product, and marketing it was common to see a dozen tools that required the same data, including Hubspot, Intercom, Fullstory, advertising platforms, data warehouses, and more.

Let’s say you want to see how many users clicked your signup button or played a song in your analytics tools. Or you want to take the users who added an item to their cart and engage them in an automated marketing campaign. First, you have to write code to collect and log each behavior that you want to track. Then you have to send it to your marketing and analytics tools. This requires a massive engineering effort and it’s distracting to maintain (it’s not uncommon to delay shipping a new feature by 2-3 weeks because of this tax). If you didn’t track something or made a mistake, that data is lost forever. Developers have to do a bunch of work that (1) is not core product development, and (2) often doesn’t benefit them directly, because they’re not the end users of this data. Flip this problem around and you have marketers and PMs who are slowed or blocked in their work and have to distract developers to get unblocked. This is painful for multiple teams. Fitz experienced this a few years back as part of the growth team at Quantcast, where he always had to work with engineering to instrument what he needed to trigger marketing flows or get analytics telemetry on his experiments. We built Freshpaint to lighten the load and streamline the workflow for both groups.

How it works:

1. Install Freshpaint’s javascript snippet on your site. It takes 60 seconds, and from that point Freshpaint collects every behavior like clicks, pageviews, etc. (a rough sketch of what this kind of autocapture involves follows below).

2. Connect destinations like Google Analytics, Amplitude, Hubspot, Fullstory, Intercom, and a data warehouse. This is done by copying and pasting an API key or account ID. Complete integrations list here: https://ift.tt/39DeaQC. We plan to build more, so let us know what you’d like to see.

3. Create events for clicks, pageviews, form submissions, and more from data in Freshpaint. Create events through code or point-and-click in our UI. Data is retroactively available back to the day you installed Freshpaint, regardless of when the event is created. We also support manual tracking and server-side tracking.

4. Send data to the destinations we support in one click. You can even backfill past data that Freshpaint has collected.

We're eager to hear your feedback, since we know HN has a ton of members who are familiar with this space from all different perspectives!
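For readers curious what browser-side autocapture generally involves, here is an illustrative sketch (not Freshpaint's snippet; the event shape, selector logic, and queue handling are assumptions):

```typescript
// Record every click and pageview up front with enough context that
// events can be defined later and backfilled from the stored stream.

interface CapturedEvent {
  type: "click" | "pageview";
  selector: string;
  url: string;
  timestamp: number;
}

const queue: CapturedEvent[] = [];

function cssPath(el: Element): string {
  // Simplified selector: tag plus id/class hints.
  const id = el.id ? `#${el.id}` : "";
  const cls = el.classList.length ? `.${[...el.classList].join(".")}` : "";
  return `${el.tagName.toLowerCase()}${id}${cls}`;
}

function capture(event: CapturedEvent): void {
  queue.push(event);
  // In a real snippet the queue would be flushed to a collection endpoint.
}

capture({ type: "pageview", selector: "", url: location.href, timestamp: Date.now() });

document.addEventListener(
  "click",
  (e) => {
    const target = e.target as Element | null;
    if (!target) return;
    capture({ type: "click", selector: cssPath(target), url: location.href, timestamp: Date.now() });
  },
  true // capture phase, so handlers that stop propagation are still observed
);
```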

Show HN: Gripeless – Capture Complaints
3 by rsify | 1 comment on Hacker News.
Hey everyone, We all see these amazing ideas being posted on Show HN, but let's be real - every product has some gripes that could be improved. The gripes could be things like broken flows, unintuitive UX behaviors, or some interactions being too slow - you name it. Everybody has their favourite products, and each of those products (awesome as they may be) has a couple of small things that could be improved to make it even better. What's interesting, though, is that for a variety of reasons we usually don't report those things at all - we might not think the complaint is solicited, we might not think that anything will be fixed after we report it, or even just writing and sending a semi-formal letter is too much friction for such a simple task.

This is why we've built Gripeless, an all-in-one solution that makes the process as smooth as possible for the user, so that they don't have to write emails or wonder whether the product owner is acting on those complaints or is even requesting complaints in the first place. Gripeless can be installed on any website, and the provided script is only 20kB gzipped. Both the script and the management dashboard are built with Elm, which turned out to be an amazing choice for making things fast.

We're specifically building a product that captures complaints and gives efficient access to them, so Gripeless is not a replacement for:

- Email support software - like Zendesk or Supportbee
- Automated analytics - like Google Analytics or Mixpanel
- Error reporting software - like Sentry or Rollbar

Complaints - that's what we're all about. There's a lot of existing science behind complaints in the service industry on which this product is based, and we've linked a bunch of it on https://ift.tt/32fjHcJ. Let us know what you think. - Maciej [1] https://ift.tt/2SNZY13 Create your project over at https://ift.tt/32fjHcJ

Launch HN: App Brainstorm – Predesign Prototyping for Drafting Apps
2 by efortis | 2 comments on Hacker News.
HN, I'm Eric from App Brainstorm (https://ift.tt/3bWu8Gz). I was motivated to build this project because when making apps I wanted to:

- Figure out the content, try flow alternatives, and explore edge cases at thinking pace.
- Test a prototype I could interact with, rather than have to memorize it, as is the case with graphic mocks.
- Understand requirements without reading much.

I hope you find it useful - and tell your colleagues about it. From time to time I'll blog about the app-drafting subject: examples, reverse drafting, etc. If you have suggestions about those topics in general, please let me know too. For private questions or comments, the site email: contact@

Launch HN: PostHog (YC W20) – open-source product analytics
7 by james_impliu | 0 comments on Hacker News.
James, Tim and Aaron here - we are building a self-hosted, open source Mixpanel/Amplitude-style product. Visit the repo at https://ift.tt/39RhNlc

After four years of working together, we originally quit our jobs to set up a company focused on tech debt. We didn’t manage to solve that problem, but we learned how important product analytics were in finding users, getting them to try it out, and in understanding which features we needed to focus on to impact users. However, when we installed product analytics, it bothered us how we needed to send our users’ data to 3rd parties. Exporting data from these tools costs $manyK a month, and it felt wrong from a privacy perspective. We designed PostHog to solve these problems.

We made PostHog automatically capture every front-end click, removing the need to add track(‘event’) - it has a toolbar to label important events after they’re captured. That means you’re spending less time fixing your tracking. You can push events too. You have API/SQL access to the underlying data, and it has analytics - funnels and event trends with segmentation based on event properties (like UTM tags). That means we’ve got the best parts of the 3rd-party analytics providers but are more privacy- and developer-friendly. We’re thinking of adding features around paths/retention/pushing events to other tools (i.e. Slack/your CRM). We’d love to hear your feature requests.

We are platform and language agnostic, with a very simple setup. If you want Python/Ruby/Node, we give you a library. For anything else, there’s an API. The repo has instructions for Heroku (1 click!), Docker or deploying from source.

We’ve launched this repo under the MIT license so any developer can use the tool. The goal is to not charge individual developers. We make money by charging a license fee for things like multiple users, user permissions, integrations with other databases, providing a hosted version and support. Give it a spin: https://ift.tt/39RhNlc. Let us know what you think.
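A minimal sketch of what using a snippet like this might look like in practice - the method names follow the posthog-js convention from memory, so treat them as assumptions and check the repo's README for the exact API:

```typescript
// Ambient declaration standing in for the loaded analytics snippet.
declare const posthog: {
  init: (apiKey: string, options?: { api_host?: string }) => void;
  capture: (event: string, properties?: Record<string, unknown>) => void;
  identify: (distinctId: string) => void;
};

// Point the snippet at a self-hosted instance instead of a third party.
posthog.init("YOUR_PROJECT_API_KEY", { api_host: "https://posthog.example.com" });

// Autocapture records clicks and pageviews automatically; explicit events
// can still be pushed for things the DOM alone can't express.
posthog.identify("user_123");
posthog.capture("plan_upgraded", { plan: "team", seats: 5, utm_source: "newsletter" });
```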


Continue reading These are the 10 coolest movie Porsches of all time

These are the 10 coolest movie Porsches of all time originally appeared on Autoblog on Thu, 20 Feb 2020 12:00:00 EST.


from Autoblog Celebrities https://ift.tt/2SNbGsJ
via IFTTT

Show HN: Asciibook – An eBook Generator for AsciiDoc
4 by chloerei | 2 comments on Hacker News.
Asciibook is an e-book generator that converts AsciiDoc to e-books in HTML/PDF/EPUB/MOBI formats. GitBook used to be the best open source e-book generator, but the project was abandoned at the end of 2018. E-book authors either continue to use the unmaintained GitBook or write their own scripts. So I came up with the idea of writing a new e-book generation tool to replace GitBook.

Homepage: https://asciibook.org/
GitHub: https://ift.tt/2H9WmzA

Features:

- Generates HTML/PDF/EPUB/MOBI.
- Command-line based, distributed via Docker, works well with CI/CD.
- HTML/CSS/JavaScript-based theme system.
- Supports latexmath.

As a demonstration, I generated an e-book from the source of "Pro Git"; please visit https://ift.tt/2VdMjBE. This project has just been released, and I will continue to optimize it. You're welcome to use it and provide feedback.

Launch HN: Goodcover (YC S17) – Cooperative Renters Insurance for Half the Price
11 by chrisplotz | 2 comments on Hacker News.
Hi HN, we’re Chris and Dan, co-founders of Goodcover ( https://ift.tt/2T4mCkG ). Goodcover provides renters insurance (only in California at the moment) and operates as a cooperative. We take a fixed fee on every policy, pool the premiums to pay claims, and then return what’s left over to Members through an annual dividend. Thanks to this model and good technology we’re able to cut the price of renters insurance in half.

I (Chris) worked in traditional insurance for 8 years. That time taught me to love insurance and how it picks people up after disasters, but it also gave me first-hand exposure to the things people hate about it – the ever-increasing prices, the adversarial claims negotiations, the mountains of paperwork, and the byzantine decision making. All these inefficiencies kept us from really understanding and working for our policyholders – the people we were meant to serve. I became convinced technology was coming for this industry. I moved to SF in 2016, which is where I met Dan through a family friend. Dan had co-founded Cloudkick (YC W09) and was now looking to start another company. He also knew technology was coming for insurance, especially after his early career at IBM, where he saw just how many “tech consultants” were placed at State Farm. Meeting him was a breath of fresh air – we started Goodcover in 2017 and got into YC right after.

And then… it took us two years to get a product to market. We had opportunities to get going faster – you can get an agent’s license, buy off-the-shelf software, and sell other companies’ products in a matter of weeks. But they would be the same crappy, overpriced, adversarial products that everybody sells, and everybody hates. What good is that? Instead, we didn’t take the shortcuts and stuck it out to change the business model to cut the price in half, and Goodcover is the result.

The story of how we did this starts with how insurance prices are made. If you have lots of claims data, you can run regressions to learn how underwriting factors like location, customer data, previous losses, etc. affect the frequency and severity of claims for every dollar of coverage you are providing. You then load that claims model with your expenses and desired profit margin, and boom, you have insurance rates. You then have to get the Department of Insurance in each state you enter to sign off on your rates (not so low that you lose money, not so high that the government calls you out for gouging).

We knew technology would save us a lot on processing costs – with Dan’s technical background we knew that we could build technology that would run the business for a fraction of the cost that the typical industry vendors charge. We weren’t going to be saddled with huge agency forces or massive brand advertising. But we didn’t have any claims data. Enter Quirk 1 of the insurance industry: all personal-lines insurance pricing is public. Since all product and pricing is approved by the state government, to start something new you need to essentially reconstitute work other companies have done, proving that the elements you choose work for your target market. This is why most new insurance offerings are basically just another version of the pricing model sold by the “Insurance Services Office” (ISO – yes, that’s a company, not a government agency). It’s approved everywhere and used by everyone, so it’s a quick start.
Lemonade uses ISO with one important modification: they set their minimum premium at $60 instead of $120, allowing them to claim an introductory price of $5 a month. If you buy more than the minimum coverage, though, you’ll quickly get to “everybody else’s price” territory.

However, to cut the price without sacrificing coverage, we couldn’t use the same model that everyone else does. We needed a more granular model where we could charge the safest 99% of people very little in exchange for charging the riskier 1% more. In my insurance career I had learned a lot about models designed for high-value homes, jewelry, cars, etc. – these models price in catastrophe (like hurricane and wildfire) very precisely to manage exposures in high-risk places. This granularity results in much lower prices for safer risks, since prices there don’t need to subsidize risky ones like they do in traditional “mass market” models. We decided to adapt these models to build our own that would be applicable to our target market, i.e. renters. Our competitors that don’t use these models are in a bit of a pickle, because they can’t raise prices for high risks too fast thanks to regulatory constraints, meaning they have to keep prices high in safe places to balance the book.

It also meant we could start with a coverage baseline that was better than the usual renters policy, including coverage for mold remediation, water damage originating from other people’s apartments, etc. We modernized the coverage, getting rid of extra coverage for things like oriental rugs and replacing it with more computer coverage. And critically, we lowered the expense base, allowing us to offer huge savings thanks to the compound effect of lower costs and more granular pricing for our target market. The biggest apples-to-apples discount we’ve clocked so far is 71%.

Custom model in hand, we were a critical step closer. But we still couldn’t offer insurance until we had the capital ready to pay claims. Insurance regulation sensibly makes sure that before you sell insurance to anyone you are adequately capitalized to pay your claims. Thus we set out to raise more capital. Here’s where we ran into Quirk 2, something I should have known all along: insurance claims capital is inherently not venturable. VCs look for 10-100x return on investment. But as we grew the company, by law our claims capital reserves would always need to be at least 3x our expected losses (8x is normal). That geometric growth pattern is not scalable. Even if we raised over and over again, by the time we “made it big” we would be selling shares to normal investors, people who value insurance companies on a multiple of their cash on hand. Today’s insurance monsters have grown their cash base slowly for about 100 years. We found it was inefficient for us to own this capital, and therefore impossible to pitch. Money we raise should go to scaling our operations, not sitting around in case we had more claims than premiums. This meant we needed to get claims capital partners – aka, rent it from insurance companies.

We had hoped to avoid this because of Quirk 3 of the insurance industry, “underwriting profits”. It used to be that you could run an insurance business with huge expenses at a loss and still make money because interest rates were high. No longer. The way insurance companies make money today is by keeping the difference of premium minus claims and expenses, or “underwriting profit”.
This sets insurers up to be in conflict with their customers, and my experience in the industry showed me that if there was a root cause for all the reasons people hate insurance, this was it. So not only would we now need to ask insurers for help, we’d need them to give us their profit back to return to Members. This process took over a year. Fortunately we secured in-principle approval from a great reinsurer (TransRe) early on. The next step was to find a “primary” insurance partner to get us set up in California. We still have the chart of our emotions on the whiteboard in our office from that time – with huge ups (like when we moved to board-level conversations with one of California’s best cooperative insurers) and huge lows (like when those talks collapsed because that insurer’s agency force wouldn’t allow the channel conflict of a digital partner). Eventually we got it done, inking a three-party deal that worked for everyone, providing a more or less stable return for our capital partners but with the excess profit returning to Goodcover’s Members. With our model and insurance capital partners in hand, we then moved to get approval from California, which went as well as a process like that could, thank goodness.

While we were working on these business objectives we built the necessary technology to service policies and quotes and maintain regulatory compliance. The easiest way to think of insurance is that it works like an append-only database. For instance, to remove coverage you would amend a person’s policy contract, and so, similarly, in our tech stack we append an event that describes the change to the policy, and folding the events together outputs the final policy. As a side benefit, this allows us to see the state of a policy at any point in time. This model works well for us considering most, if not all, of our code is written in a functional language (Scala). A rough sketch of this event model appears below.

Which brings us up to late last year, when we wrote our first Goodcover policy. Honestly it’s been quite an ordeal, but we think the changes we’ve been able to implement have been worth it. I am so thankful for the hard work and persistence of the team, and for all the feedback and help the HN and YC community have given us over the years (shout out to anyone who remembers our “Advice” Show HN from 2018! - https://ift.tt/2zeHKh7 ). Renters insurance in CA is just the first step (home insurance and more states are on the way). We have a ton more listening to Members to do, but we hope you enjoy the benefits – and we're super grateful for any feedback you have!
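For readers curious about the append-only policy model, here is a minimal sketch (in TypeScript rather than Goodcover's Scala, with invented event names and coverage fields) of replaying policy events to recover the state at any point in time:

```typescript
// Every change to a policy is an appended event; state is a fold over events.

type PolicyEvent =
  | { kind: "PolicyIssued"; at: number; coverages: Record<string, number> }
  | { kind: "CoverageAdded"; at: number; coverage: string; limit: number }
  | { kind: "CoverageRemoved"; at: number; coverage: string };

interface PolicyState {
  coverages: Record<string, number>;
}

function replay(events: PolicyEvent[], asOf: number = Infinity): PolicyState {
  return events
    .filter((e) => e.at <= asOf) // "as of" a point in time
    .reduce<PolicyState>(
      (state, e) => {
        switch (e.kind) {
          case "PolicyIssued":
            return { coverages: { ...e.coverages } };
          case "CoverageAdded":
            return { coverages: { ...state.coverages, [e.coverage]: e.limit } };
          case "CoverageRemoved": {
            const { [e.coverage]: _removed, ...rest } = state.coverages;
            return { coverages: rest };
          }
        }
      },
      { coverages: {} }
    );
}

const events: PolicyEvent[] = [
  { kind: "PolicyIssued", at: 1, coverages: { personalProperty: 20000, liability: 100000 } },
  { kind: "CoverageAdded", at: 5, coverage: "computerEquipment", limit: 5000 },
  { kind: "CoverageRemoved", at: 9, coverage: "computerEquipment" },
];

console.log(replay(events, 6)); // state as of time 6: computerEquipment still present
console.log(replay(events));    // current state: computerEquipment removed
```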

Show HN: Hyvor Talk – a better way to add comments to blogs
2 by supz_k | 0 comments on Hacker News.
Hello HN! I'm Supun from Hyvor Talk (https://talk.hyvor.com), a commenting platform for websites. First, to be clear, Hyvor is the startup I'm building and Hyvor Talk is its first product.

Here's how it started. I'm a PHP dev and I built a blog a little while ago. When it came to adding comments, I didn't like the options available, for three reasons. 1. Some were not privacy-focused: they collected a bunch of users' data and placed ads on users' sites. 2. Most of them were not modern-looking. 3. Most of them were expensive. So, I created my own commenting system for my website, with only comments and replies. After a few weeks, I thought it would be better if I could let others use this and solve the above 3 problems. So, in December 2019, I started working on this full time. Since then, I have developed many features (most of them based on suggestions I got through our Discord server).

Here are some features/facts about it:

* It does not collect users' data.
* It's fast because there is no third-party code, no ads, etc.
* It has a powerful AJAX-based moderation console.
* It's completely free up to 40,000 page views per month.
* It's fully customizable (colors, fonts, texts) - you can see some customized pages on our landing page.
* It has a built-in spam detector.
* My girlfriend (a graphic designer) and I worked hard to make the system as user-friendly and attractive as possible.

At the moment, I'm happy to say, 20+ websites use this commenting system actively. More than 1,000 users have signed up as commenters. Thank you for checking this out! I'd love to hear your feedback.


Elon Musk is not one to mince words, but he may have just lost a potential customer because of a cutting tweet. Gates and Brownlee have met before, and the idea was to have Gates discuss some of what the Bill & Melinda Gates Foundation has planned for this year, which marks the 20th anniversary of the organization. Unsurprisingly, the conversation touched on climate change and, in pretty short order, sustainable transportation, with Brownlee bringing up Tesla and asking if, when "premium" elect…

Continue reading Bill Gates compliments Tesla; Elon Musk does not return the favor

Bill Gates compliments Tesla; Elon Musk does not return the favor originally appeared on Autoblog on Wed, 19 Feb 2020 08:44:00 EST.


from Autoblog Celebrities https://ift.tt/2SDDPCh
via IFTTT

Launch HN: API Tracker (YC W20) – Track and Manage the APIs You Use
3 by cameroncooper | 0 comments on Hacker News.
Hey HN! We’re Cameron, Trung and Matt from API Tracker (https://ift.tt/38GoNBc). We make tools to help with using third-party APIs in production. When software teams integrate with APIs they often run into outages, network issues, interface changes or even bugs that cause unexpected behavior in the rest of their system. These problems are hard to predict and prepare for, so most teams don’t deal with them until there's an outage and they have to do an emergency build to add logging and get to a root cause.

This is exactly what happened to us. Trung and I are both software engineers and we spent a lot of time and energy trying to make our API integrations robust and reliable in production. We found ourselves instrumenting all our API calls so we could know how many calls we were making, how long they were taking and whether they were failing. We set up alerts for errors and latency increases and integrated with PagerDuty. We wrote retry logic with exponential backoff. We wrote failover from one API provider to another. At the end of it all we had built a lot of tooling that required maintenance and wasn’t even applied uniformly across all of our integrations. After building all this infrastructure we realized that most other teams are reinventing the same wheel.

To solve this problem we built an API proxy that takes requests and relays them to the API provider. By proxying this traffic we are able to instrument each call to measure latency, record status codes, headers and bodies, and add reliability features like automatic retry with exponential backoff. From there we can monitor and alert on issues and provide a searchable call log for debugging and auditability.

We knew that because we were asking teams to run their mission-critical API calls through us, we had to build a highly available and scalable proxy architecture. We’ve done this by designing a proxy that can be distributed across multiple regions and clouds. We are currently running out of AWS. Global Accelerator allows us to use AWS's private internet backbone to quickly get traffic to our proxies, which run behind AWS Network Load Balancers. While this can help us ensure resilience against infrastructure outages, we also need to protect against self-inflicted wounds like bugs and bad deployments. Upon release we bring up a new set of proxy instances, deploy the code, and run our full test suite to make sure that each instance is able to proxy requests correctly. Once all instances are healthy they go into the load balancer. For companies with more stringent needs we support on-premise installations as well as a client-side SDK that can do instrumentation without the proxy.

Today we offer the service as a subscription. We hope to make it easy for teams to get visibility and control across all their integrations without having to build it themselves. This includes:

- Detailed logging on all of their third-party API calls
- Monitoring and alerting for increased latency and error rates
- Reliability features like automatic retry, circuit breaker and request queueing
- Rate limit and quota monitoring

A rough sketch of the retry-and-instrumentation idea appears below. We would love to hear from the community how you are managing your API integrations. Our story is a result of our experiences and how we dealt with them, but we know the HN community has seen it all. We really would love to hear from you about problems you’ve had and how you dealt with them. Please leave a comment or send us an email to founders@apitracker.com
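Here is that sketch: an illustrative fetch wrapper (not API Tracker's proxy; the record shape, retry policy, and backoff constants are assumptions) showing per-call instrumentation plus retry with exponential backoff:

```typescript
// Log every call's latency and status, retrying transient failures with backoff.

interface CallRecord {
  url: string;
  status: number | "network_error";
  latencyMs: number;
  attempt: number;
}

const callLog: CallRecord[] = [];

async function instrumentedFetch(
  url: string,
  init: RequestInit = {},
  maxRetries = 3
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const start = Date.now();
    try {
      const response = await fetch(url, init);
      callLog.push({ url, status: response.status, latencyMs: Date.now() - start, attempt });
      // Retry only on server-side errors; 4xx responses are returned as-is.
      if (response.status < 500) return response;
      lastError = new Error(`HTTP ${response.status}`);
    } catch (err) {
      callLog.push({ url, status: "network_error", latencyMs: Date.now() - start, attempt });
      lastError = err;
    }
    // Exponential backoff with jitter: ~200ms, ~400ms, ~800ms, ...
    const delay = 200 * 2 ** attempt + Math.random() * 100;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw lastError;
}
```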

Show HN: CoveTrader – Combines multiple crypto exchanges into one trade platform
2 by knudsen80 | 0 comments on Hacker News.
Our dev team recently launched CoveTrader at https://ift.tt/2V1JF1Z. Using our extensive experience from traditional financial markets, we connect to many crypto exchanges (Coinbase, Kraken, Bitstamp, etc.) and show aggregated order books, trade lists, and analytics in both real-time and historical formats. We currently support BTC, ETH, LTC, BCH, ETC, EOS, and XRP, but are adding more. It's currently just analytics, but we'll be adding order sending in Q2. Feedback and feature requests are welcome and can be sent to sknudsen@covemarkets.com.
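To give a flavour of what an aggregated order book means in practice, here is a toy sketch (not CoveTrader's engine; the exchange names and price levels are made up) that merges bids from multiple venues by price level:

```typescript
// Merge per-exchange bid levels into one book, summed by price and sorted best-first.

interface Level {
  exchange: string;
  price: number;
  size: number;
}

function aggregateBids(books: Record<string, Level[]>): { price: number; size: number; sources: string[] }[] {
  const byPrice = new Map<number, { size: number; sources: Set<string> }>();
  for (const levels of Object.values(books)) {
    for (const { exchange, price, size } of levels) {
      const entry = byPrice.get(price) ?? { size: 0, sources: new Set<string>() };
      entry.size += size;
      entry.sources.add(exchange);
      byPrice.set(price, entry);
    }
  }
  return [...byPrice.entries()]
    .map(([price, { size, sources }]) => ({ price, size, sources: [...sources] }))
    .sort((a, b) => b.price - a.price); // highest bid first
}

console.log(
  aggregateBids({
    coinbase: [{ exchange: "coinbase", price: 9700.5, size: 1.2 }],
    kraken: [
      { exchange: "kraken", price: 9700.5, size: 0.8 },
      { exchange: "kraken", price: 9700.0, size: 2.0 },
    ],
  })
);
```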

Show HN: FounderPhone – make customer support personal with SMS
25 by parthi | 0 comments on Hacker News.
Hi HN! We're Parthi and Kunal from FounderPhone (https://ift.tt/39LIHew). FounderPhone is a shared customer support inbox for SMS and calls in Slack. We've built and shipped 7 products recently and if there's one thing we've learned, it's that your personal relationship with your customer is your secret weapon as a startup. Having customers email support@company.com or message a bot via Intercom doesn't feel personal. People are skeptical they will ever get a response. We've had a lot of success giving out our phone number to customers we really care about and telling them to text us whenever something comes up. Apparently, lots of great founders like Patrick from Stripe did this while growing their startups. The problem is that phone numbers aren't really meant for customer support and it gets overwhelming pretty quickly. So we hacked together a solution for ourselves where we made Slack a shared inbox. When a customer texts or calls me, our team can also see the messages and incoming calls. We can discuss how best to handle the issue in Slack and then anyone can respond via text. For calls, anyone available can redirect calls to their own number. From the customer's perspective, they're just texting a single number. They're not frustrated with messy tickets or being routed to 3 different people. They will always read your responses because it's in their SMS inbox instead of being lost amongst their 20,000 unread emails. This is just the start! We're looking into building a whole suite of software to make customer support feel both personal and immediate. We're making an integration with Segment and Sentry to alert you when a customer has an issue so you can reach out to them about it before they complain to you. Text our FounderPhone (510) 756-2522 with your name or email founders@founderphone.com if you have any questions. Thanks for checking us out!
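As a conceptual sketch of the SMS-to-Slack flow described above (not FounderPhone's code; it assumes a Twilio-style inbound-SMS webhook and a Slack incoming webhook URL, neither of which the post confirms):

```typescript
// Forward inbound texts into a shared Slack channel so anyone on the team can respond.
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false })); // Twilio-style webhooks post form-encoded bodies

const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? "";

app.post("/sms", async (req, res) => {
  const from = req.body.From as string; // sender's phone number
  const body = req.body.Body as string; // text message content
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: `New text from ${from}: ${body}` }),
  });
  res.type("text/xml").send("<Response></Response>"); // empty TwiML: no automatic reply
});

app.listen(3000);
```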


Dale Earnhardt Jr. spent decades taking risks on the track and in the air. Earnhardt said Sunday before the Daytona 500 that he’s changed his approach to flying following a harrowing crash landing near Bristol Motor Speedway last August. Earnhardt, his wife Amy, daughter Isla, dog and two pilots escaped the fiery jet in east Tennessee.

Continue reading Dale Jr. dives into the details to get over fear of flying after jet crash

Dale Jr. dives into the details to get over fear of flying after jet crash originally appeared on Autoblog on Mon, 17 Feb 2020 18:16:00 EST.


from Autoblog Celebrities https://ift.tt/38Dzgxg
via IFTTT

Launch HN: Motion (YC W20) – defense against online distractions and addictions
8 by qiyuxuan96 | 0 comments on Hacker News.
Hi Everyone, It's Harry, Ethan, and Omid here from Motion ( https://inmotion.app ). We built a Chrome extension that uses real-time interventions to prevent people from unknowingly wasting time on online distractions.

A few months ago, I mentioned that I was spending too much time on Facebook. Omid recommended using StayFocusd to block certain sites. It worked well - my time wasted dropped to 15 minutes the next day. However, a few days later, I was setting up my company’s Facebook page, and StayFocusd kicked in at the 15-minute mark, the limit I had set for myself. I needed to finish that page, but there was no way around the hard block from StayFocusd, so I had to uninstall the extension. Later, I tried several extensions like RescueTime and BlockSite. Each was either so permissive that it wasn't useful, or so strict that I had to uninstall it.

We realized that existing solutions did not work because their approach is too prescriptive and simplistic. They didn’t recognize that people need to use Facebook or YouTube for legitimate purposes. The problem is really intricate. On one hand, Facebook is great for getting reminders about friends’ birthdays or managing business pages; on the other hand, every minute spent on Facebook could potentially lead to a trap. These traps come in all forms - video autoplay, news articles with catchy titles, and sponsored content that looks just like your friends’ posts. Instead of always being hindered from visiting these sites, I needed to have access to their useful parts but be careful not to get distracted in the process. I decided to build a simple tool for myself - a countdown timer each time I visit a distracting site. We all started using it and liked it, so we decided to hand out the extension to some friends. Surprisingly, despite many bugs, our user retention was still infinitely higher than with our previous ideas. In fact, we built 6 MVPs during our pivoting process - a commission-free prediction market, a recruiting platform for quant traders, an intercity carpooling service, a workplace motivation app, an online travel agency, and crypto options market making (the last one because both Ethan and I were options traders before our startup; Omid was a college student until this year. For backstory - Ethan and I were best friends in college, and Omid and I have been friends since high school). Since none of these ideas had worked and we were finally getting some users, we decided to work on this one. Also, with this one we were solving a problem that we ourselves had.

Here’s how it works now: each time you visit a distracting site (e.g. Twitter), we show a screen where you can choose to either leave or proceed to the site with a visible countdown timer. On sites like Facebook and YouTube, you can choose to hide the newsfeed or video recommendations. Once time is up, we ask you whether you're done. When you visit less distracting sites such as Gmail, you get reminders of how long you’ve been on them, so you don't unknowingly spend too long on things like responding to email. Before you start working on something, you can write down your task, and it will show up with a timer on every tab you visit until you clear the task, so you don't get sidetracked. Finally, every morning we give you a report on how you spent your time the previous day and allow you to mark the sites that are distracting. (A rough sketch of the countdown mechanic appears below.)

We firmly believe in data privacy and promise that we will never sell user data. We do not collect the URLs or content of the sites you visit. We had to decide between using Chrome's "all_urls" permission and the narrower "activeTab" permission. If we only had activeTab, each time the user opened a new page they would have to manually activate our extension. That would be an unacceptable user experience in our opinion, so we settled on the broader permission.

The extension is free at the moment. We plan on monetizing either through a premium tier with productivity tools built for power users or by charging every user a very low amount.

Big tech companies have been attacking our attention with sophisticated technology, spending billions of dollars to optimize their engagement metrics. We may think we are in control, but often we are unknowingly being exploited by companies who profit handsomely off our attention, which, if you think about it, is the most valuable asset we have. If we could simply turn off all these products, that would be an effective defense, but for many people that's not an option. We believe we're on the road to building a more useful tool to help individuals defend their attention against these traps. It's still far from complete.

This is a problem many in the HN community have thought a lot about. We’d really love your feedback and to learn what you would like to see in a tool like this - what productivity problem do you have that a tool could help solve? In what other ways could tooling help strengthen our control over our own attention? Thanks so much in advance. Harry, Ethan, and Omid
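Here is that countdown sketch, as a rough content-script illustration (not Motion's extension; the site list, prompts, and styling are placeholders):

```typescript
// On a listed site, ask for a time budget, show a visible timer, and check in when time is up.

const DISTRACTING_HOSTS = ["facebook.com", "twitter.com", "youtube.com"];

function startCountdown(minutes: number): void {
  const badge = document.createElement("div");
  badge.style.cssText =
    "position:fixed;bottom:16px;right:16px;z-index:99999;padding:8px 12px;" +
    "background:#111;color:#fff;border-radius:8px;font:14px sans-serif";
  document.body.appendChild(badge);

  let remaining = minutes * 60;
  const timer = setInterval(() => {
    remaining--;
    badge.textContent = `${Math.floor(remaining / 60)}:${String(remaining % 60).padStart(2, "0")} left`;
    if (remaining <= 0) {
      clearInterval(timer);
      badge.remove();
      const done = confirm("Time is up. Are you done here?");
      if (done) location.href = "about:blank"; // leave the site
      else startCountdown(5);                  // grant a short extension
    }
  }, 1000);
}

if (DISTRACTING_HOSTS.some((host) => location.hostname.endsWith(host))) {
  const input = prompt("This site is on your distracting list. Minutes to allow?", "15");
  const minutes = Number(input);
  if (!input || Number.isNaN(minutes) || minutes <= 0) {
    location.href = "about:blank"; // chose to leave
  } else {
    startCountdown(minutes);
  }
}
```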
