


Focus on Research and Clinical Evaluation (FoRCE): Powering clinical trials research through a secure and integrated data management platform
Critical care units are among the most data-rich environments in clinical settings, with data generated by advanced patient monitoring, frequent laboratory and radiologic tests, and around-the-clock evaluation. There are substantial opportunities in linking data collected as part of such clinical practice with data collected in a research setting, such as genome-wide studies or comprehensive imaging protocols. However, security and privacy issues have historically been a significant barrier to the storage, analysis, and linkage of such biomedical data. Further, disparate technologies hinder collaboration across teams, most of which lack the secure systems required to enable federation and sharing of these data. This is particularly true when clinical practice or research designs require near-real-time analysis and timely feedback, such as when dealing with streamed medical data or output from clinical laboratories. Current commercial and research solutions often fail to integrate different data types, are incapable of handling streaming data, and rely solely on the security measures put in place by the organizations that deploy them.
This proposal seeks to build FoRCE (Focus on Research and Clinical Evaluation), a scalable and adaptable add-on module to the existing Indoc Informatics platform that will address critical gaps in cybersecurity and privacy infrastructure within shared clinical and research settings, while fulfilling important unmet needs for both the clinical and research communities. FoRCE will provide the secure architecture and processes to support the collection, federation, and sharing of data from distributed clinical settings, including critical care units, clinical laboratories, and imaging facilities. The proposed platform will address several key issues, including security considerations; infrastructure and software requirements for data linkage; solutions for handling streaming, real-time medical data; and regulatory and ethics compliance when linking diverse medical data modalities in a clinical setting.
FoRCE will be designed and developed with broad applicability in mind, and will therefore accommodate different data types from numerous technologies and across multiple disease states. The long-term impact of FoRCE on improving the health of Ontarians depends on its uptake within research and clinical settings. An initial project that will exercise the platform as part of its testing and validation is Dr. Maslove's integrated approach to merging genomic and physiologic data streams from the ICU in the context of clinical research. FoRCE will enable Dr. Maslove's team of critical care researchers to move beyond predictors of survival to focus on predictors of response to therapy, so that clinical trials in the ICU can be optimized to produce actionable evidence and personalized results. This will lead to better allocation of ICU resources, which in Canada cost nearly $3,000 per patient per day, or $3.72 billion per year.
Industry Partner(s): Indoc Research
Academic Institution: Queen's University
Academic Researcher: David Maslove
Platform: Cloud, Parallel CPU
Focus Areas: Cybersecurity, Digital Media, Health


A cloud‐based, multi‐modal, cognitive ophthalmic imaging platform for enhanced clinical trial design and personalized medicine in blinding eye disease
Age-related macular degeneration (AMD) is the leading cause of irreversible blindness in Canada and the industrialized world, yet there are no treatments for the vast majority of patients. Led by Tracery Ophthalmics Inc., and working with Translatum Medicus Inc. (TMi) and academic partners at the Robarts Research Institute, Western University, and the “High Risk Dry AMD Clinic” of St. Michael's Hospital, we will engage SOSCIP's Cloud Analytics platform, including servers, software, and human resources, to accelerate the search for new treatments.
Specifically, Tracery has developed a novel functional imaging method, “AMD Imaging” (AMDI), that has already generated unprecedented pictures of the retina (the film of the eye), capturing both known and unknown “flavours” of disease (the phenotype). These complex images will be compared against an individual's genetic makeup (their genotype) and their concurrent illnesses, medications, and lifestyle history (their epigenetics). Further, Tracery's imaging will help identify particular patients who will benefit from TMi's drug development program, and ultimately help doctors choose which treatment will work best. Over the course of two years, we will involve increasing numbers of medical experts and their patients to generate and amass AMDI images, evaluating them over time and against other modalities.
Ultimately, through the “I3” program, we will work with IBM to train Watson and the Medical Sieve to recognize and co-analyse complex disease patterns in the context of the ever-expanding scientific literature. In short, we will leverage cloud-based computing to integrate image-based and structured data, genomics, and large-scale data analytics to unite global users. We anticipate that this approach will significantly accelerate drug development, providing personalized treatment for the right patient at the right time.
Industry Partner(s): Tracery Ophthalmics
Academic Institution: Western University
Academic Researcher: Ali Khan
Co-PI Names: Filiberto Altomare, Louis Giavedoni & Steven Scherer
Platform: Cloud
Focus Areas: Digital Media, Health


A data-driven framework for integrating visual inspection into the injection moulding pipeline
Recent advances in machine vision have led to new opportunities for automating the entire manufacturing pipeline. Consider, for example, the situation where an unattended computer vision system inspects a widget and decides whether or not to discard it. Even this modest amount of automation can save many hundreds of person-hours on a typical factory floor. For simple designs, we now have automated inspection methods relying on lasers, 3D scanning, or other imaging modalities that can decide whether a widget has any defect; for complex designs, this ability remains elusive. More importantly, however, automated inspection schemes can only decide whether a widget deviates from its intended design (say, one available in the form of a CAD drawing); they cannot decide what changes should be made further down the manufacturing pipeline to prevent similar defects in the future. This project aims to explore machine learning techniques that integrate automated inspection with the manufacturing process. Specifically, we will focus on the injection moulding process. We will develop new theory and methods for characterizing the injection moulding process in terms of quantities measured via an automated inspection system. This effort will lead to a deeper understanding of the role of the myriad parameters that control injection moulding processes.
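As a rough illustration of the kind of data-driven characterization described above, the sketch below fits a model that maps automated-inspection measurements back to moulding parameters. The parameter names, inspection features, and synthetic data are hypothetical placeholders, not the project's actual variables or method.

# Sketch: relate automated-inspection measurements to injection moulding
# parameters. Data here is synthetic; feature and parameter names are
# hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical process parameters logged by the moulding machine.
melt_temp = rng.uniform(200, 260, n)       # deg C
hold_pressure = rng.uniform(40, 90, n)     # MPa
cooling_time = rng.uniform(5, 20, n)       # s
params = np.column_stack([melt_temp, hold_pressure, cooling_time])

# Hypothetical inspection measurements (deviation from the CAD nominal).
warpage = 0.02 * (260 - melt_temp) + 0.05 * (20 - cooling_time) + rng.normal(0, 0.1, n)
sink_depth = 0.03 * (90 - hold_pressure) + rng.normal(0, 0.1, n)
inspection = np.column_stack([warpage, sink_depth])

# Learn the inverse mapping (inspection measurements -> process parameters),
# so that an observed defect pattern suggests which setting to adjust.
X_train, X_test, y_train, y_test = train_test_split(inspection, params, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out widgets:", model.score(X_test, y_test))
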
Industry Partner(s): Axiom Group
Academic Institution: Ontario Tech University
Academic Researcher: Qureshi, Faisal
Focus Areas: Advanced Manufacturing, AI


A dynamic and scalable data cleaning system for Watson Analytics
Poor data quality is a serious and costly problem affecting organizations across all industries. Real data is often dirty, containing missing, erroneous, incomplete, and duplicate values. It is estimated that poor data quality costs organizations between 15% and 25% of their operating budget. Existing data cleaning solutions focus on identifying inconsistencies that do not conform to prescribed data formats, assuming the data remains relatively static. As modern applications move towards more dynamic search analytics and visualization, new data quality solutions that support dynamic data cleaning are needed. An increasing number of data analysis tools, such as Watson Analytics, provide flexible data browsing and querying abilities. In order to ensure reliable, trusted, and relevant data analysis, dynamic data cleaning solutions are required. In particular, current data quality tools fail to: (1) adapt to fast-changing data and data quality rules (for example, as new datasets are integrated); (2) accommodate new data governance rules that may be imposed for a particular industry; and (3) utilize industry-specific terminology and concepts that can refine data quality recommendations for greater accuracy and relevance. In this project, we will develop a system for dynamic data cleaning that adapts to changing data and rules, and considers industry-specific models for improved data quality.
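As a minimal sketch of the kind of declarative rule checking involved (not IBM's or the project's actual system), the example below evaluates three hypothetical data quality rules against a small pandas table and reports violating rows. In the dynamic setting targeted by the project, both the data and the rules would change over time and such checks would be re-evaluated incrementally.

# Sketch: flag rows that violate simple, declarative data quality rules.
# The table, column names, and rules are hypothetical examples.
import pandas as pd

records = pd.DataFrame({
    "customer_id": [1, 2, 3, 3, 4],
    "country":     ["CA", "CA", "US", "US", "CA"],
    "postal_code": ["M5V 2T6", None, "90210", "90210", "12345"],
})

# Rule 1: postal_code must not be missing.
missing_postal = records[records["postal_code"].isna()]

# Rule 2 (conditional): if country == "CA", postal_code must match "A1A 1A1".
ca_format = r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$"
ca_rows = records[records["country"] == "CA"]
bad_ca_postal = ca_rows[~ca_rows["postal_code"].fillna("").str.match(ca_format)]

# Rule 3: customer_id should be unique (possible duplicate records).
duplicates = records[records.duplicated(subset="customer_id", keep=False)]

for name, violations in [("missing postal code", missing_postal),
                         ("invalid Canadian postal code", bad_ca_postal),
                         ("duplicate customer_id", duplicates)]:
    print(f"{name}: {len(violations)} violating row(s)")
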
Industry Partner(s): IBM Canada Ltd.
Academic Institution: McMaster University
Academic Researcher: Fei Chiang
Platform: Cloud
Focus Areas: Cybersecurity, Digital Media



Achieving Circular Wastewater Management with Machine Learning
Effective wastewater treatment is essential to the health of the environment, and municipal wastewater treatment plants in Canada are required to achieve specific effluent water quality goals to minimize the impact of human-generated wastewater on the surrounding environment. Most wastewater treatment plants include a combination of physical, chemical, and biological unit processes and therefore have several energy inputs to drive mixing, maintain ideal temperatures, and move water from one unit process to the next. Methane and other gases (biogas) and biosolids are generated during wastewater treatment. Both of these can be captured and repurposed for use within and outside of the wastewater treatment plant and can in some cases even be converted to revenue streams. Thus, biogas and biosolids are considered recoverable resources rather than waste products. Circular wastewater management (CWM) is an emerging approach that aims to optimize wastewater treatment, energy usage, and resource recovery. To achieve CWM, the operators of wastewater treatment plants must have a thorough understanding and reliable control of the different elements of the system. This is usually achieved using a combination of operator expertise, online sensors, and offline water quality measurements coupled with data collection, storage, and analysis software. Conventional statistical approaches have traditionally been used to analyze the data generated in wastewater treatment plants, but these approaches are not flexible enough to fully describe and control the complex chemical and biological processes underlying wastewater treatment or to help utilities achieve CWM. Machine learning approaches are more flexible than conventional statistical approaches. In this project, we will use machine learning tools to identify opportunities to move towards circular wastewater management in a wastewater treatment plant operated by the Ontario Clean Water Agency.
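A minimal sketch of one such use of machine learning follows, assuming routine sensor readings are used to predict an effluent quality indicator that operators could then act on. The variables and data are synthetic stand-ins, not OCWA plant data or the project's actual models.

# Sketch: predict an effluent quality indicator from routine sensor data.
# Data and variable names are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000

influent_flow = rng.uniform(50, 150, n)     # ML/d
aeration_rate = rng.uniform(0.5, 2.0, n)    # relative blower output
temperature = rng.uniform(8, 25, n)         # deg C
influent_ammonia = rng.uniform(20, 45, n)   # mg/L

# Synthetic effluent ammonia: worse at high load, low aeration, low temperature.
effluent_ammonia = (0.15 * influent_ammonia
                    + 0.02 * influent_flow
                    - 3.0 * aeration_rate
                    - 0.1 * temperature
                    + rng.normal(0, 0.5, n)).clip(min=0)

X = np.column_stack([influent_flow, aeration_rate, temperature, influent_ammonia])
model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, effluent_ammonia, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
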
Industry Partner(s): Ontario Clean Water Agency
Academic Institution: York University
Academic Researcher: Stephanie L. Gora
Co-PI Names: Usman T. Khan, Satinder K. Brar
Platform: Cloud
Focus Areas: Cities, Clean Tech, Environment & Climate


Active learning for automatic generation of narratives from numeric financial and supply chain data
Large enterprises compile and analyze large amounts of data on a daily basis. Typically, the collected raw data is processed by financial analysts to produce reports. Executive personnel use these reports to oversee operations and make decisions based on the data. Some of the processing performed by financial analysts can be easily automated by currently available computational tools; these tasks mostly involve standard transformations of the raw data, including visualizations and aggregate summaries. Automating the remaining manual processing, on the other hand, requires more involved artificial intelligence techniques.
In this project we aim to solve one of these harder-to-automate tasks. Analyzing textual data using Natural Language Processing (NLP) techniques is a standard method of data processing in modern software tools; however, the vast majority of NLP methods aim to analyze existing text rather than generate meaningful narratives.
Since the generation of text is a domain-dependent and non-trivial task, the automated generation of narratives requires novel research to be useful in an enterprise environment. In this project we focus on using numerical financial and supply chain data to generate useful textual reports for the executive level of companies. Upon successful completion of the project, financial analysts will spend less time on repetitive tasks and have more time to focus on reporting tasks requiring higher-level data fusion skills.
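To make the input/output shape of the task concrete, a minimal template-based sketch follows; the project targets more involved, learned generation techniques, and the figures, names, and thresholds below are invented examples rather than Unilever data.

# Sketch: turn a few numeric financial figures into a short narrative.
# A production system would use learned models; this template version only
# illustrates the input/output shape. All figures are invented.
def quarterly_narrative(region: str, revenue: float, prev_revenue: float,
                        on_time_delivery: float) -> str:
    change = (revenue - prev_revenue) / prev_revenue * 100
    direction = "grew" if change >= 0 else "declined"
    sentences = [
        f"Revenue in {region} {direction} {abs(change):.1f}% quarter over quarter "
        f"to ${revenue:,.0f}.",
        f"On-time delivery stood at {on_time_delivery:.0%}.",
    ]
    if on_time_delivery < 0.9:
        sentences.append("Supply chain performance is below the 90% target and "
                         "warrants analyst review.")
    return " ".join(sentences)

print(quarterly_narrative("Ontario", revenue=1_250_000, prev_revenue=1_180_000,
                          on_time_delivery=0.87))
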
Industry Partner(s): Unilever Canada Inc.
Academic Institution: Ryerson University
Academic Researcher: Ayse Bener
Co-PI Names: John Maidens
Focus Areas: Advanced Manufacturing, Digital Media




Advanced Analytics and Fault Detection of Industrial Steam Traps via Machine Learning
Steam is a flexible and very cost-effective medium for transferring heat. All steam systems require that the steam arrive in the right quantity and at the right temperature and pressure to work efficiently. The presence of foreign materials such as air, condensate, and other gases can reduce this efficiency. To that end, steam traps are used to remove condensate, debris, and other gases from the steam system to maintain operating efficiency and prevent system damage. Due to constant operation, steam traps have a high failure rate of about 20%, according to the US Department of Energy. Pulse Industrial's steam trap sensor provides monitoring capabilities and early detection of these steam trap failures, leading to substantial improvements in the operating performance of partner facilities and reduced operating costs. This research project will optimize the analytics and diagnostics performance of currently deployed steam trap monitors by developing an AI model using data collected from partner plants. This will allow for a holistic improvement of the overall system and is expected to increase the accuracy of the steam trap sensors.
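A minimal sketch of steam trap fault classification from sensor-style features follows. The features, failure behaviour, and data are invented stand-ins; they are not Pulse Industrial's sensor signals or deployed model.

# Sketch: classify steam trap condition (healthy vs. failed) from simple
# sensor features. Features and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(2)
n = 800
labels = rng.integers(0, 2, n)  # 0 = healthy, 1 = failed (e.g. blow-through)

# Assume failed traps run hotter downstream, show more ultrasonic noise,
# and stop cycling normally.
downstream_temp = rng.normal(95, 5, n) + 15 * labels
ultrasonic_rms = rng.normal(0.2, 0.05, n) + 0.15 * labels
cycle_rate = rng.normal(6, 2, n) - 3 * labels

X = np.column_stack([downstream_temp, ultrasonic_rms, cycle_rate])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["healthy", "failed"]))
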
Industry Partner(s): Pulse Industrial
Academic Institution: Ontario Tech University
Academic Researcher: Xianke Lin
Focus Areas: Advanced Manufacturing, Clean Tech, Energy, Mining




Advancing the CANWET watershed model and decision support system by utilizing high performance parallel computing functionality
Watershed modeling is widely used to better understand processes and to help inform planning and watershed management decisions. Examples include identifying impacts associated with land-use change, investigating outcomes of infrastructure development, and predicting effects of climate change. The proposed project will see the evolution of a desktop-based watershed modeling and decision support system into a web-based tool that will allow greater access by decision makers and stakeholders. By this means we will advance the idea of evaluating cumulative effects in the watershed decision-making process, rather than the current practice of assessing proposed changes in isolation.
The proposed software evolution will take advantage of high performance computing by porting existing code to a higher-performing language and restructuring it to operate using parallel or multi-core processing. The result is expected to be a dramatic reduction in simulation run times. Reduced run times will facilitate the use of automatic calibration routines during model setup, reducing costs, and will enable rapid responses when a simulation is re-run through the web-based user interface. The web-based tool will be used by decision and policy makers in the watersheds that drain into Lake Erie to understand the sources of pollution, especially phosphorus, which is a major contributor to Lake Erie's eutrophication problems; to develop policies supporting a wide variety of watershed planning; and ultimately to help achieve the federal and Ontario government commitments to reduce phosphorus entering Lake Erie by 40% by 2025.
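A minimal sketch of the kind of parallel execution the port would enable follows, assuming independent model runs (for example, candidate parameter sets in an automatic calibration sweep) can be farmed out to CPU cores. The simulate() function is a toy stand-in for a CANWET model run, and the parameters and observed load are invented.

# Sketch: run independent watershed simulations in parallel across CPU cores.
# `simulate` is a hypothetical stand-in for a CANWET model run.
from multiprocessing import Pool
import math

OBSERVED_LOAD = 3500.0   # hypothetical observed annual phosphorus load

def simulate(params):
    """Stand-in for a single model run; returns a simulated load."""
    runoff_coeff, export_coeff = params
    # Placeholder arithmetic standing in for hours of real computation.
    return sum(runoff_coeff * export_coeff * math.sin(i / 50.0) ** 2
               for i in range(100_000)) / 10.0

if __name__ == "__main__":
    # Candidate parameter sets for an automatic calibration sweep.
    candidates = [(r, e) for r in (0.2, 0.3, 0.4, 0.5) for e in (1.0, 1.5, 2.0)]
    with Pool() as pool:                     # one worker per available core
        loads = pool.map(simulate, candidates)
    errors = [abs(load - OBSERVED_LOAD) for load in loads]
    best = candidates[errors.index(min(errors))]
    print("best-fitting parameter set:", best)
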
Industry Partner(s): Greenland International Consulting
Academic Institution: University of Guelph
Academic Researcher: Prasad Daggupati
Platform: Cloud
Focus Areas: Cities, Clean Tech, Digital Media, Water

Advancing video categorization
Vubble is a media tech company that builds solutions for trustworthy digital video distribution and curation. Using a combination of algorithms and human curators, Vubble searches the internet to locate video content of interest to its users. Vubble is collaborating with Dr. Vida Movahedi from Seneca's School of Information and Communication Technology to develop a machine-learning algorithm that will automatically output highly probable categories for videos. With this algorithm implemented in the Vubble workflow to assist in automated video identification, Vubble will be able to better address existing and emerging customer demands while increasing its productivity and competitiveness. This video identification research project will be Vubble's first step in understanding how to automate the identification of accurate video. The need for automated video curation is pressing, as video is quickly becoming the world's dominant form of media consumption, particularly for digital-native younger audiences. Furthermore, the results of the applied research will help Vubble address what it believes is a looming problem facing all media consumers and society: the rise of fake news videos created from archival footage.
Industry Partner(s): Vubble Inc.
Academic Institution: Seneca College
Academic Researcher: Vida Movahedi
Platform: Cloud
Focus Areas: Digital Media

Agile computing for rapid DNA sequencing in mobile platforms
DNA can now be measured in terms of electronic signals with pocket-sized semiconductor devices instead of 100-pound machines. This has started to transform DNA analysis into a mobile activity, with the possibility of tracking and analyzing the health of organisms at unprecedented levels of detail, time, and population. But the remote cloud-based computer services currently needed to process the electronic signals generated by these miniature DNA-meters cost over $100 to complete an initial analysis of one human genome and consume over 1000 watts of power. Also, the cost of wirelessly transmitting measured data to these cloud-based analyzers can exceed $1000 per human genome. Further, reliance on external high-performance compute services poses a greater risk of compromising the security of the DNA data collected. This project proposes the construction of a specialized high-performance miniature computer, the agile base caller (ABC), built from reconfigurable silicon chips that can connect directly to the DNA-meter and analyze the DNA it measures in real time.
The ABC, by virtue of its size, will preserve the mobility of emerging DNA measurement machines, and will enable them to analyze data for less than $1 while consuming less than 10 watts. These cost and power improvements will significantly lower the barriers to applying genomic analysis in non-laboratory settings. For example, they will allow continuous monitoring of Canada's food supply for the presence of harmful biological agents, with the possibility of cutting analysis delays from weeks to hours.
Industry Partner(s): Canadian Food Inspection Agency (CFIA)
Academic Institution: York University
Academic Researcher: Sebastian Magierowski
Platform: Cloud
Focus Areas: Health

Agile real-time radio signal processing
Canadian very long baseline interferometry (VLBI) capability has been missing for a decade. Jointly with Thoth Technology Inc., we propose to restore domestic and international VLBI infrastructure, which Thoth Technology will commercialize. This project will implement and optimize multi-telescope correlation and analysis software on the SOSCIP BGQ, Agile, and LMS platforms. The resulting pipeline package will allow commercial turnkey VLBI delivery by Thoth Technology to domestic and international customers in a market of about $10 million per year.
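The core operation such a correlation pipeline performs is the cross-correlation of digitized voltage streams from pairs of telescopes to recover their relative delay. The sketch below shows that step on synthetic noise; it is illustrative only and not the project's optimized multi-telescope code.

# Sketch: FFT-based cross-correlation of two synthetic station signals to
# recover the relative delay, the basic building block of a VLBI correlator.
import numpy as np

rng = np.random.default_rng(3)
n_samples = 2 ** 16
true_delay = 137                 # samples by which station B lags station A

sky = rng.normal(size=n_samples + true_delay)            # common sky signal
station_a = sky[true_delay:] + 0.5 * rng.normal(size=n_samples)
station_b = sky[:n_samples] + 0.5 * rng.normal(size=n_samples)

# Circular cross-correlation via FFT (adequate when delay << n_samples).
spec_a = np.fft.rfft(station_a)
spec_b = np.fft.rfft(station_b)
xcorr = np.fft.irfft(spec_b * np.conj(spec_a))

print("recovered delay:", int(np.argmax(xcorr)), "samples; true delay:", true_delay)
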
Industry Partner(s): Thoth Technology Inc.
Academic Institution: University of Toronto
Academic Researcher: Ue-Li Pen
Platform: Cloud, Parallel CPU
Focus Areas: Digital Media


An economics-aware autonomic management system for big data applications
Recent advancements in software technology, including virtualization, microservices, and cloud computing, have created novel challenges and opportunities in developing and delivering software. They have also given rise to DevOps, a hybrid team responsible for both developing and managing the software system, and have led to the development of tools that take advantage of the enhanced flexibility and enable the automation of the software management cycle. In this new world characterized by volatility and speed, the Business Operations (BizOps) team is lagging behind and remains disconnected from the DevOps team. BizOps views software as a product and is responsible for defining the business and economic strategy around it.
The goal of the proposed project is to imbue DevOps tools and processes with BizOps knowledge and metrics through formal models and methods. Currently, BizOps receives the software system or service as a finished product, a black box, on which a price must be set before it is offered to clients. The price and the marketing strategy are usually defined at the beginning of a sales cycle (e.g. a year) and remain the same for the entirety of the cycle. However, this is in contrast to the great volatility of the service itself. In most cases, strategies are based on the instincts of managers with high acumen and experience, on broad marketing surveys, or on one-to-one negotiations with clients; this information can easily change and may remain disconnected from software development. The end product of this project is a set of economic and performance models that connect the DevOps and BizOps processes during the software's life cycle and can eventually be incorporated into automated tools to adapt and scale the system in production and enable continuous development, integration, and delivery.
Industry Partner(s): IBM Canada Inc.
Academic Institution: York University
Academic Researcher: Marin Litoiu
Platform: Cloud
Focus Areas: Cities, Digital Media


Industry Partner(s): Osisko Mining Corporation
Academic Institution: Western University
Academic Researcher: Neil Banerjee
Co-PI Names: Leonardo Feltrin
Platform: Cloud
Focus Areas: Digital Media, Mining





Automated Assessment of Forest Fire Hazards
The proposed project is to develop a Deep Reinforcement Learning (DRL) model capable of using Light Detection and Ranging (LiDAR) data to automatically analyze whether there is a risk of forest fire due to vegetation management around electric equipment. The model automatically classifies points in a LiDAR scan to determine whether there is direct contact between vegetation and electric equipment, and outputs control actions for an Unmanned Aerial Vehicle (UAV) that can autonomously inspect infrastructure as needed. Our research will focus on the solution of two different but complementary machine learning problems that the DRL model will have to solve to allow a UAV to autonomously acquire data of sufficient quality for fire risk assessment. First, there is the problem of perception, i.e., analyzing raw LiDAR data to deduce classifications of individual objects and their shape and orientation. Second, there is the problem of control, i.e., using the perceived state of the environment as determined by the first stage to navigate through the environment until a complete scan has been obtained of the region under assessment. Solving these two problems in a harmonious way that allows for an efficient, accurate, and complete scan of the environment is an open research problem in the domain of DRL that we aim to address.
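A minimal, non-learning sketch of how the two stages fit together follows: a perception step over LiDAR points feeds a control decision for the UAV. The thresholds, toy policy, and synthetic scan are invented placeholders; the project's actual perception and control are handled by a DRL agent.

# Sketch: a perception step over LiDAR points feeding a simple control
# decision for the UAV. All values and rules below are hypothetical.
import numpy as np

rng = np.random.default_rng(5)

def perceive(points):
    """Toy 'classifier': points above 8 m are treated as conductor, the rest
    as vegetation; returns the minimum vertical clearance between the two."""
    conductor = points[points[:, 2] > 8.0]
    vegetation = points[points[:, 2] <= 8.0]
    if len(conductor) == 0 or len(vegetation) == 0:
        return np.inf
    return conductor[:, 2].min() - vegetation[:, 2].max()

def control(clearance, position):
    """Toy policy: move closer for a denser scan when clearance looks risky,
    otherwise advance to the next span of the line."""
    if clearance < 1.0:
        return position - np.array([0.0, 5.0, 0.0])   # approach for re-scan
    return position + np.array([25.0, 0.0, 0.0])      # next inspection waypoint

uav_position = np.array([0.0, 30.0, 12.0])
for step in range(3):
    scan = rng.uniform([0, 0, 0], [10, 10, 10], size=(200, 3))  # fake LiDAR scan
    clearance = perceive(scan)
    uav_position = control(clearance, uav_position)
    print(f"step {step}: clearance = {clearance:.2f} m, next waypoint = {uav_position}")
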
Industry Partner(s): Metsco Holding Inc.
Academic Institution: University of Guelph
Academic Researcher: Lei Lei
Platform: Cloud, GPU, Parallel CPU
Focus Areas: Business Analytics, Clean Tech, Energy, Environment & Climate, Mining

Automated cytogenetic dosimetry as a public health emergency medical countermeasure
Biodosimetry is a useful tool for assessing the radiation dose received by an individual when no reliable physical dosimetry is available. Our Automated Dicentric Chromosome Identifier (ADCI) software has been developed to automate dose estimation for gamma and X-ray radiation exposures. Biodosimetry laboratories currently process these data manually, and more than a few simultaneous samples would quickly overwhelm their capacity. The software has been developed to handle radiation exposure estimation in mass-casualty or moderate-scale radiation events. Federal biodosimetry and clinical cytogenetic laboratories have automated systems to collect digital chromosome images of cells with and without exposure to radiation. We have developed advanced image segmentation and artificial intelligence methods to analyze these images. ADCI identifies dicentric chromosomes (DCCs), a widely recognized, gold-standard hallmark of radiation damage. The number and distribution of DCCs are determined and compared with a calibration curve of known radiation dose; ADCI can also generate these calibration curves. The software computes the dose received for one or more test samples and generates a report for the user. The desktop version of ADCI contains an easy-to-use graphical user interface to perform all of these functions. The supercomputer version proposed here will be optimized to determine doses for many samples simultaneously, which would be essential in the event of a mass casualty.
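The dose-estimation step lends itself to a worked example: dicentric yield per cell is conventionally modeled as a linear-quadratic function of dose, Y(D) = C + alpha*D + beta*D^2, fit to calibration samples of known dose and then inverted for a test sample. The sketch below uses invented numbers throughout; it is not Cytognomix's calibration data or ADCI's implementation.

# Sketch: fit a linear-quadratic dose-response calibration curve and invert
# it to estimate the dose of a test sample. All values are synthetic.
import numpy as np

# Calibration data: (dose in Gy, dicentrics per cell) -- invented values.
doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])
yields = np.array([0.001, 0.02, 0.06, 0.19, 0.40, 0.68])

# Least-squares fit of Y = C + alpha*D + beta*D^2.
C, alpha, beta = np.polynomial.polynomial.polyfit(doses, yields, deg=2)

def estimate_dose(observed_yield: float) -> float:
    """Invert the fitted curve and keep the physically meaningful root."""
    roots = np.roots([beta, alpha, C - observed_yield])
    real = roots[np.isreal(roots)].real
    return float(max(real[real >= 0]))

# Test sample: 42 dicentrics observed in 500 scored cells.
test_yield = 42 / 500
print(f"estimated dose: {estimate_dose(test_yield):.2f} Gy")
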
Industry Partner(s): Cytognomix Inc.
Academic Institution: Western University
Academic Researcher: Joan Knoll
Co-PI Names: Mark Daley
Platform: Cloud, Parallel CPU
Focus Areas: Health


Automatic Identification of Phytoplankton using Deep Learning
Under the impact of global climate change and human activities, harmful algal blooms (HABs) have become a growing concern due to their negative impacts on water-related industries, such as the safe water supply needed by fish farmers in aquaculture. The current method of algae identification requires water samples to be sent to a human expert to identify and count all the organisms. Typically this process takes about a week, as the water sample must be preserved and then shipped to a lab for analysis. Once at the lab, a human expert must manually look through a microscope, which is both time-consuming and prone to human error. Therefore, reliable and cost-effective methods of quantifying the type and concentration of algae cells have become critical for ensuring successful water management. Blue Lion Labs, in partnership with the University of Waterloo, is building an innovative system to automatically classify multiple types of algae in situ and in real time by using a custom imaging system and deep learning. This will be accomplished in two main steps. First, the technical team will work with an in-house algae expert (a phycologist) to build up a labelled database of images. Second, the research team will design and build a novel neural network architecture to segment and classify the images generated by the imaging system. The result of this proposed research will dramatically reduce the analysis time of a water sample from weeks to hours, enabling fish farmers to better manage harmful algal blooms.
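A minimal sketch of the classification half of such a system follows, assuming cropped cell images and genus-level labels. The architecture, image size, class names, and dummy data are illustrative only, not Blue Lion Labs' network or dataset.

# Sketch: a small convolutional classifier that could assign an algae label
# to a cropped cell image; trained here on one dummy batch of random data.
import torch
import torch.nn as nn

CLASSES = ["Anabaena", "Microcystis", "diatom", "other"]   # hypothetical labels

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),
)

# One dummy training step on random tensors, standing in for labelled crops
# from the imaging system.
images = torch.randn(8, 3, 64, 64)             # batch of 64x64 RGB crops
labels = torch.randint(0, len(CLASSES), (8,))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
print("training loss on dummy batch:", float(loss))
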
Industry Partner(s): Blue Lion Labs
Academic Institution: University of Waterloo
Academic Researcher: Wong, Alexander
Focus Areas: AI, Environment & Climate


Big cardiac data
We propose to develop a cloud-based, image-centered informatics system, powered by newly developed big data analytics from our group, for the automatic diagnosis and prognosis of heart failure (HF). HF is a leading cause of morbidity and mortality in Ontario and around the world, and there is no cure; early diagnosis and accurate prediction before symptoms appear are therefore critical. Previously, the lack of efficient image analytics tools has been the major barrier to an accurate prognosis and early diagnosis system. The proposed system will greatly improve clinical accuracy and enable accurate early diagnosis and prediction by intelligently analyzing all associated historical images and clinical reports. The final system will not only greatly reduce sudden death and irreversible cardiac conditions but also optimize decision-making in current healthcare. This project builds on the strength of one successful SOSCIP project and two OCE projects.
Industry Partner(s): London X-ray Associates
Academic Institution: Western University
Academic Researcher: Li Shuo
Focus Areas: Cybersecurity, Health



Big data analysis and optimization of rural and community broadband wireless networks
Rural broadband initiatives are happening in a big wave across the world. Canada, being a vast and diverse country, has a particular Internet reachability problem because its population is sparse. It is not economically viable to bring fiber to each and every house in Canada, nor to connect every household through satellites. Broadband Internet over wireless networks is a good option, where the Internet is brought over fiber to a point of presence and then carried to houses over wireless links.
EION is actively working in Ontario and Newfoundland to make rural broadband a reality. Wireless networking in rural Canada is a challenge in itself due to weather, terrain, and accessibility. Real-time factors such as weather, water, and foliage alter the maximum capacity of the wireless pipe. In addition, household usage patterns, especially real-time video that requires fast response times, demand adequate planning.
This is becoming critical, as almost 80% of the traffic appears to be video-related due to the popularity of applications such as Netflix, YouTube, and Shomi. Intelligence in rural wireless broadband networks is a necessity to deliver good-quality voice, video, and data reliably. Optimization at the system and network level, using heuristics and artificial intelligence techniques based on big data analysis of video packets, is paramount to enable smoothly performing rural broadband networks.
In this project, we will analyze the big data of video packets in rural broadband networks in Ontario and Newfoundland and design optimized network architectures to bring reliable video services over constrained rural broadband wireless networks.
Industry Partner(s): EION Inc.
Academic Institution: University of Ottawa
Academic Researcher: Amiya Nayak
Co-PI Names: Octavia Dobre
Platform: Cloud
Focus Areas: Cities, Digital Media, Energy

Big data analytics for the maritime internet of things (IoT)
The Internet of Things (IoT) is an emerging phenomenon that enables ordinary devices to generate sensor data and interact with one another to improve daily life. The maritime world has not escaped the influence of the IoT revolution. We are in the midst of a technological wave in which vessels are no longer the only entities carrying sensors (GPS or radar); other maritime entities such as cranes, crates, boats, and pickup trucks are being equipped with the same capabilities. This trend constitutes the backbone of the so-called Maritime Internet of Things (mIoT).
This project is about exploiting the tide of sensor data emitted by a myriad of maritime entities in order to improve both internal and collaborative processes of mIoT-related organizations; for instance, think of a Port Authority adjusting its berthing and unloading schedule upon receiving notice that a vessel has been delayed by harsh weather conditions. The challenge addressed by this research project is the generation of actionable intelligence for Decision Support using Big Data analytics. Actionable intelligence includes anomalies, alerts, threats, potential response generation, process refinement and other types of knowledge that improve the efficiency of a maritime-related organization and/or the manner in which it interacts with other similar organizations.
Industry Partner(s): Larus Technologies Inc.
Academic Institution: University of Ottawa
Academic Researcher: Emil Petriu
Focus Areas: Advanced Manufacturing


Closed-loop Design of Diquats for Use as Electrolytes in Redox Flow Battery Systems
The proposed project aims at the closed-loop discovery of organic redox flow battery electrolytes. In this continuous workflow, properties of organic molecules are predicted using machine learning (ML) models and/or computed using quantum mechanical models, which helps control the cost of experiments. The lead candidates are then synthesized and characterized using automated systems. Finally, the results of characterization are used to adjust the computational models. We focus on a specific class of organic molecules, diquats, that show high redox reversibility and good chemical stability. A virtual screening pipeline will be developed using proprietary software provided by Kebotix Canada.
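A minimal sketch of one predict-select-characterize-retrain turn of such a closed loop follows. The molecular descriptors, the toy "characterization" function, and the off-the-shelf surrogate model are placeholders, not the project's quantum mechanical models or Kebotix's proprietary pipeline.

# Sketch: one turn of a closed-loop screening workflow -- an ML surrogate
# proposes candidates, the best ones are "characterized", and the results
# are fed back to retrain the model. All components are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

def characterize(x):
    """Stand-in for synthesis + electrochemical characterization (or a QM calc)."""
    return -np.sum((x - 0.3) ** 2) + rng.normal(0, 0.01)

# Initial training set: descriptor vectors for already-measured molecules.
X_known = rng.random((20, 5))
y_known = np.array([characterize(x) for x in X_known])

library = rng.random((5000, 5))          # virtual library of candidate molecules
model = RandomForestRegressor(random_state=0)

for cycle in range(3):
    model.fit(X_known, y_known)
    predictions = model.predict(library)
    picks = np.argsort(predictions)[-5:]                 # top-5 predicted candidates
    new_X = library[picks]
    new_y = np.array([characterize(x) for x in new_X])   # feed back measured results
    X_known = np.vstack([X_known, new_X])
    y_known = np.concatenate([y_known, new_y])
    print(f"cycle {cycle}: best measured property so far = {y_known.max():.3f}")
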
Industry Partner(s): Kebotix Canada
Academic Institution: The University of Toronto
Academic Researcher: Aspuru-Guzik, Alan
Platform: Cloud, GPU, Parallel CPU
Focus Areas: Advanced Manufacturing, Energy, Quantum