Nadia is the Founder and CEO of Atenic Lab. She received her PhD in Theoretical Cosmology from the University of California, Davis, where she researched early-universe physics, inflationary theories, and quantum information.
With over five years of experience leading initiatives on the Data Science R&D team at Lineage, the world's largest temperature-controlled warehousing company, she founded Atenic Lab to bring data-driven, science-based solutions and change management strategies to a wider range of businesses in heavy industry, agriculture, food production, and construction.
Morgan received his PhD in Particle and Nuclear Physics from the University of California, Davis, where he worked on the LBNE/DUNE, Hyper-Kamiokande, WATCHMAN, and SNO+ experiments in both instrumentation and data modeling.
Through roles as a postdoctoral scholar and project scientist at UC Berkeley and Lawrence Berkeley National Lab, and as a data science expert on the logistics algorithms team at Stitch Fix, he has over 14 years of combined experience developing data pipelines and deploying machine learning models to solve problems on continuous data streams.
Our strategic goals are to ensure that proposed solutions are developed and implemented holistically, considering both their direct and indirect impact on all relevant parts of the business. We prioritize measuring and reporting each deployed solution's impact, whether financial, operational, or otherwise, on company revenue and broader business goals. Finally, we ensure the longevity of each solution beyond Atenic Lab's involvement by establishing the resources and procedures needed to support its continued success.
We take a holistic, big-picture approach to problem solving and solution implementation, striving for the best possible outcomes across the business while avoiding trade-offs that compromise other areas. We begin by building broad knowledge of the industry sector as a whole, and of the incentive structures that drive it, before diving into technical problem solving. No solution can be implemented successfully if it does not align with the incentives of the business.
Every business generates data, whether or not it is recorded on a computer. Like any resource, if data is left unutilized, or is manipulated in a way that produces misleading conclusions, it will not generate value for the business. We aim to make the most of this resource by:
Mining the resource: Much of the data in the heavy industry and supply chain space is not consolidated in one location, and some of the most useful data is not even digital. We start by uncovering all necessary data sources, then collecting that data and ingesting it into data stores.
Refining the resource: We clean and standardize the data where necessary so that it is readily usable by stakeholders across multiple use cases.
Testing and characterizing the resource: With the data in a usable format, we analyze it to gain quantitative insight into how the system is performing. Joining datasets that have historically been siloed more often than not uncovers opportunities to add value by reducing waste, increasing efficiency, or improving the resiliency of the system (see the sketch after this list).
Using the resource to prototype and deploy a solution: Once we have identified the most significant opportunities for the business, we plan and build out solutions. These can be optimizations, control systems, the discovery and vetting of new technology, or simply analytics tools.
Resource flow to generate value: Ultimately, any technical solution or insight gained by analyzing data is of no use unless the important information it produces flows to the right parts of the organization. There can be significant waste reduction and efficiency gains simply from ensuring that relevant, actionable information reaches the right people at the right time. We therefore work with our customers to build pipelines and protocols that curate information flow to strategic parts of the business.
Sustaining value: Finally, whenever needed, we implement data-driven change management to ensure the adoption and sustainment of new procedures and technologies.
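To make the "testing and characterizing" step concrete, here is a minimal, hypothetical sketch of joining two historically siloed datasets, in this case invented energy-meter readings and production throughput, to surface a unit-level efficiency metric. All facility names, dates, and numbers are placeholders for illustration, not client data.

```python
import pandas as pd

# Hypothetical data from two systems that are typically kept separate.
energy = pd.DataFrame({
    "facility": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01", "2024-01-02"]),
    "kwh": [12_400, 12_900, 8_100, 8_300],
})
throughput = pd.DataFrame({
    "facility": ["A", "A", "B", "B"],
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01", "2024-01-02"]),
    "pallets_moved": [310, 150, 240, 235],
})

# Joining the previously siloed datasets yields an efficiency metric that
# neither source can provide on its own.
merged = energy.merge(throughput, on=["facility", "date"])
merged["kwh_per_pallet"] = merged["kwh"] / merged["pallets_moved"]

# Flag facility-days that use noticeably more energy per pallet than typical:
# candidate opportunities for waste reduction.
typical = merged["kwh_per_pallet"].median()
merged["flagged"] = merged["kwh_per_pallet"] > 1.5 * typical
print(merged[["facility", "date", "kwh_per_pallet", "flagged"]])
```

In this toy example, one facility-day stands out with roughly twice the typical energy per pallet, exactly the kind of signal that would prompt a closer look at operations on that day.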
We always connect each solution and its outcomes to the business's bottom line and long-term success, prioritizing high-impact, low-investment opportunities before moving on to more complex or technical improvements. We implement this approach by calculating data-informed ROIs and building out the complete business case for the solution.
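As a rough illustration of what a data-informed ROI estimate looks like, the sketch below computes a simple ROI and payback period over a fixed horizon. The figures are invented placeholders; a real business case would use measured baselines and, where appropriate, discounted cash flows.

```python
def simple_roi(annual_benefit: float, upfront_cost: float,
               annual_operating_cost: float, years: int) -> dict:
    """Back-of-the-envelope ROI for a proposed solution over a fixed horizon."""
    net_annual = annual_benefit - annual_operating_cost
    total_net_benefit = net_annual * years - upfront_cost
    roi = total_net_benefit / upfront_cost
    payback_years = upfront_cost / net_annual if net_annual > 0 else float("inf")
    return {"roi": roi, "payback_years": payback_years}

# Illustrative only: a $120k deployment saving $90k/yr with $15k/yr upkeep, over 3 years.
print(simple_roi(annual_benefit=90_000, upfront_cost=120_000,
                 annual_operating_cost=15_000, years=3))
# -> roi = 0.875 (87.5%), payback_years = 1.6
```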
To validate and measure the effect of solutions, we develop a measurement and verification (M&V) framework and identify metrics that capture system performance (energy, operational, or other metrics, depending on the use case) and that will clearly help root-cause issues arising either from the project in question or later as part of regular operations.
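A minimal sketch of what such an M&V check might look like: comparing a performance metric over a baseline window and a post-deployment window against an agreed improvement target. The metric, values, and threshold below are all assumptions for illustration.

```python
import numpy as np

# Hypothetical metric (e.g. kWh per unit produced) over equal-length windows.
baseline = np.array([41.2, 39.8, 40.5, 42.1, 40.9, 41.5])  # placeholder values
post     = np.array([36.4, 35.9, 37.2, 36.8, 35.5, 36.1])  # placeholder values

change = post.mean() - baseline.mean()
pct_change = 100 * change / baseline.mean()
print(f"Mean change: {change:.2f} ({pct_change:.1f}%)")

# A result short of the agreed target (or drifting back later) triggers root-cause analysis.
TARGET_PCT = -5.0  # improvement target agreed with the customer (assumed)
print("Target met" if pct_change <= TARGET_PCT else "Investigate")
```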
We emphasize stakeholder buy-in and carefully consider the impact of proposed solutions on all stakeholders and areas of the organization. We strive to:
Understand stakeholder incentives and align them as much as possible across all individuals and organizations involved.
Communicate and gather feedback at all levels, focusing on information that is relevant and understandable to each stakeholder group.
Create a cohesive storyline to support the business case and implementation of solutions.
Implementing new solutions always carries some amount of risk. We leverage all the strategies mentioned above to ensure the project is successful; however, to further mitigate unexpected hurdles in deployment and scaling, we take a multi-staged implementation approach (usually three phases, though this may vary from case to case):
Proof of Concept Phase: In this phase we build out the solution at a smaller or restricted scale, and/or roll it out to one or two facilities, users, or use cases. Here we validate the viability of the solution or technology as a whole and collect feedback and performance results (both technical and related to the implementation itself) to improve the next iteration.
Proof of Enterprise: In this phase we expand the scope of the solution, either adding features or deploying it to more sites, users, or use cases, selected specifically to form a diverse, representative cross-section. Here we aim to measure how well the solution functions across these diverse scenarios and decide which features may need to change to improve performance.
Scaling: In this final phase, only after any predominant issues have been addressed in the first two phases, the solution is scaled to include all features or deployed to a larger group or to all sites, users, and use cases.