


FoRCE (Focus on Research and Clinical Evaluation): Powering clinical trials research through a secure and integrated data platform
Critical care units are among the most data-rich environments in clinical settings, with data generated by advanced patient monitoring, frequent laboratory and radiologic testing, and around-the-clock evaluation. There are substantial opportunities in linking data collected as part of such clinical practice with data collected in research settings, such as genome-wide studies or comprehensive imaging protocols. However, security and privacy issues have historically been a significant barrier to the storage, analysis, and linkage of such biomedical data. Further, disparate technologies hinder collaboration across teams, most of which lack the secure systems required to enable federation and sharing of these data. This is particularly true when clinical practice or research designs require near-real-time analysis and timely feedback, such as when dealing with streamed medical data or output from clinical laboratories. Current commercial and research solutions often fail to integrate different data types, cannot handle streaming data, and rely solely on the security measures put in place by the organizations that deploy them.
This proposal seeks to build FoRCE (Focus on Research and Clinical Evaluation), a scalable and adaptable add-on module to the existing Indoc Informatics platform that will address critical gaps in cybersecurity and privacy infrastructure within shared clinical and research settings, while fulfilling important unmet needs for both the clinical and research communities. FoRCE will provide the secure architecture and processes to support the collection, federation, and sharing of data from distributed clinical settings, including critical care units, clinical laboratories, and imaging facilities. The proposed platform will address several key issues: security considerations; infrastructure and software requirements for linkage; solutions for handling streaming real-time medical data; and regulatory and ethics compliance when linking diverse medical data modalities in a clinical setting.
FoRCE will be designed and developed with broad applicability in mind, allowing data of different types, from numerous technologies, and across multiple disease states to flow through the platform. The long-term impact of FoRCE on improving the health of Ontarians depends, of course, on its utilization within research and clinical settings. An initial project that will exercise the platform as part of its testing and validation is Dr. Maslove's integrated approach to merging genomic and physiologic data streams from the ICU in the context of clinical research. FoRCE will enable Dr. Maslove's team of critical care researchers to move beyond predictors of survival to focus on predictors of response to therapy, so that clinical trials in the ICU can be optimized to produce actionable evidence and personalized results. This will lead to better allocation of ICU resources, which in Canada cost nearly $3,000 per patient per day, or $3.72 billion per year.
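One standard building block of secure record linkage of the kind FoRCE would need is keyed pseudonymization: each site hashes patient identifiers with a shared secret so records can be matched across sites without exposing identity. The sketch below is illustrative only; the key, ID format, and token length are hypothetical, not part of the proposal.

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, site_key: bytes) -> str:
    """Keyed hash of a patient ID: linkable across sites, but not reversible."""
    return hmac.new(site_key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

# hypothetical shared secret held by the linkage service, not by analysts
key = b"shared-linkage-key"

# the same ID always yields the same token, so records link across sites
print(pseudonymize("MRN-00123", key) == pseudonymize("MRN-00123", key))
# different IDs yield different tokens
print(pseudonymize("MRN-00123", key) != pseudonymize("MRN-00456", key))
```

Because the hash is keyed, an attacker without the secret cannot recompute tokens from guessed identifiers, unlike a plain SHA-256 of the ID.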
Industry Partner(s): Indoc Research
PI & Academic Institution: David Maslove, Queen's University
# of HQPs: 3
Platform: LMS
Focus Areas/Industry Sector: Cybersecurity, Digital Media, Health
Technology: Real-Time Analytics


A cloud‐based, multi‐modal, cognitive ophthalmic imaging platform for enhanced clinical trial design and personalized medicine in blinding eye disease
Age-related macular degeneration (AMD) is the leading cause of irreversible blindness in Canada and the industrialized world, yet there are no treatments for the vast majority of patients. Led by Tracery Ophthalmics Inc., and working with Translatum Medicus Inc. (TMi) and academic partners at the Robarts Research Institute, Western University, and the "High Risk Dry AMD Clinic" of St. Michael's Hospital, we will engage SOSCIP's Cloud Analytics platform, including servers, software, and human resources, to accelerate the search for new treatments.
Specifically, Tracery has developed a novel functional imaging method, "AMD Imaging" (AMDI), that has already generated unprecedented pictures of the retina (the film of the eye) capturing both known and unknown "flavours" of disease (the phenotype). These complex images will be compared against an individual's genetic makeup (their genotype) and their concurrent illnesses, medications, and lifestyle history (their epigenetics). Further, Tracery's imaging will help identify the particular patients who will benefit from TMi's drug development program, and ultimately help doctors choose which treatment will work best. Over the course of two years, we will involve increasing numbers of medical experts and their patients to generate and amass AMDI images, evaluating them over time and against other modalities.
Ultimately, through the "I3" program, we will work with IBM to train Watson and the Medical Sieve to recognize and co-analyse complex disease patterns in the context of the ever-expanding scientific literature. In short, we will leverage cloud-based computing to integrate image-based and structured data, genomics, and large-scale data analytics to unite global users. We anticipate that this approach will significantly accelerate drug development, providing personalized treatment for the right patient at the right time.
Industry Partner(s): Tracery Ophthalmics
PI & Academic Institution: Ali Khan, Western University
Co-PI Names: Filiberto Altomare, Louis Giavedoni & Steven Scherer
# of HQPs: 2
Platform: Cloud
Focus Areas/Industry Sector: Digital Media, Health
Technology: Artificial Intelligence, Image/Video Processing


A dynamic and scalable data cleaning system for Watson analytics
Poor data quality is a serious and costly problem affecting organizations across all industries. Real data is often dirty, containing missing, erroneous, incomplete, and duplicate values. It is estimated that poor data quality costs organizations between 15% and 25% of their operating budget. Existing data cleaning solutions focus on identifying inconsistencies that do not conform to prescribed data formats, assuming the data remains relatively static. As modern applications move towards more dynamic search analytics and visualization, new data quality solutions that support dynamic data cleaning are needed. An increasing number of data analysis tools, such as Watson Analytics, provide flexible data browsing and querying abilities. In order to ensure reliable, trusted, and relevant data analysis, dynamic data cleaning solutions are required. In particular, current data quality tools fail to: (1) adapt to fast-changing data and data quality rules (for example, as new datasets are integrated); (2) accommodate new data governance rules that may be imposed for a particular industry; and (3) utilize industry-specific terminology and concepts that can refine data quality recommendations for greater accuracy and relevance. In this project, we will develop a system for dynamic data cleaning that adapts to changing data and rules, and considers industry-specific models for improved data quality.
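A minimal example of the kind of rule such a system must check, and re-check as rules and data change, is a functional dependency (e.g. postal code determines city). The sketch below flags violating rows; the column names and data are invented for illustration.

```python
def fd_violations(rows, lhs, rhs):
    """Return rows that violate a functional dependency lhs -> rhs."""
    seen = {}   # lhs value -> first rhs value observed
    bad = []
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        if key in seen and seen[key] != val:
            bad.append(row)          # conflicting rhs for an already-seen lhs
        else:
            seen.setdefault(key, val)
    return bad

rows = [
    {"zip": "90210", "city": "Beverly Hills"},
    {"zip": "90210", "city": "Los Angeles"},   # violates zip -> city
    {"zip": "10001", "city": "New York"},
]
print(fd_violations(rows, ["zip"], ["city"]))
```

A dynamic cleaner would maintain the `seen` index incrementally as rows stream in and rebuild it when the rule set itself changes, rather than re-scanning a static table.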
Industry Partner(s): IBM Canada Ltd.
PI & Academic Institution: Fei Chiang, McMaster University
# of HQPs: 3
Platform: Cloud
Focus Areas/Industry Sector: Cybersecurity, Digital Media
Technology: Image/Video Processing, Real-Time Analytics


Active learning for automatic generation of narratives from numeric financial and supply chain data
Large enterprises compile and analyze large amounts of data on a daily basis. Typically, the collected raw data is processed by financial analysts to produce reports. Executive personnel use these reports to oversee operations and make decisions based on the data. Some of the processing performed by analysts can be easily automated by currently available computational tools; these tasks mostly apply standard transformations to the raw data, including visualizations and aggregate summaries. Automating the remaining manual processing, on the other hand, requires more involved AI techniques. In our project, we aim to solve one of these harder-to-automate tasks. Analyzing textual data using NLP is a standard method of data processing in modern software tools; however, the vast majority of NLP methods aim to analyze textual data rather than generate meaningful narratives. Since text generation is a domain-dependent and non-trivial task, automated generation of narratives requires novel research to be useful in an enterprise environment. In this project, we focus on using numerical financial and supply chain data to generate useful textual reports for executive-level use. Upon successful completion of this project, financial analysts will spend less time on repetitive tasks and have more time to focus on reporting tasks requiring higher-level data fusion skills.
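A rule-based baseline for numbers-to-narrative generation can be sketched as below; the metric names, thresholds, and phrasing are illustrative, and the project's learned models would go well beyond such fixed templates.

```python
def summarize_metric(label, current, previous):
    """Turn two numbers into a one-sentence narrative (template-based sketch)."""
    change = (current - previous) / previous * 100
    if abs(change) < 1:
        trend = "remained flat"
    elif change > 0:
        trend = f"rose {change:.1f}%"
    else:
        trend = f"fell {abs(change):.1f}%"
    return f"{label} {trend} to {current:,.0f} from {previous:,.0f}."

# hypothetical quarterly figures
print(summarize_metric("Q3 revenue", 1_250_000, 1_100_000))
```

An active-learning loop, as the title suggests, would ask analysts to correct generated sentences and use those corrections to refine the generation model, rather than hard-coding thresholds as here.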
Industry Partner(s): Unilever Canada
PI & Academic Institution: John Maidens, Ryerson University
Co-PI Names: Ayse Bener
# of HQPs: 3
Focus Areas/Industry Sector: Advanced Manufacturing, Digital Media
Technology: Artificial Intelligence, Real-Time Analytics



Advancing the CANWET watershed model and decision support system by utilizing high performance parallel computing functionality
Watershed modeling is widely used to better understand processes and help inform planning and watershed management decisions. Examples include identifying impacts associated with land-use change, investigating outcomes of infrastructure development, and predicting effects of climate change. The proposed project will see the evolution of a desktop-based watershed modeling and decision support system into a web-based tool that will allow greater access by decision makers and stakeholders. By this means we will advance the idea of evaluating cumulative effects in the watershed decision-making process, rather than the current practice of assessing proposed changes in isolation.
The proposed software evolution will take advantage of high-performance computing by porting existing code to a higher-performing language and restructuring it to operate using parallel or multi-core processing. The result is expected to be a dramatic reduction in simulation run times. Reduced run times will facilitate the use of automatic calibration routines during model setup, reducing costs. They will also enable rapid response when a simulation is re-run by a request through the web-based user interface. The web-based tool will be used by decision and policy makers in the watersheds that drain to Lake Erie to understand sources of pollution, especially phosphorus, a major contributor to Lake Erie's eutrophication problems, and to develop policies supporting a wide variety of watershed planning, ultimately helping to achieve the federal and Ontario government commitments to reduce phosphorus entering Lake Erie by 40% by 2025.
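The restructuring idea, running independent sub-catchments concurrently, can be sketched as follows. The toy runoff formula and parameters are invented, and a production port would use processes or MPI ranks rather than threads for CPU-bound work.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_subcatchment(params):
    """Toy stand-in for one sub-catchment's runoff simulation."""
    area_km2, runoff_coeff, rainfall_mm = params
    # runoff volume proxy: area x rainfall depth x runoff coefficient
    return area_km2 * rainfall_mm * runoff_coeff

# hypothetical sub-catchment parameters: (area, coefficient, rainfall)
subcatchments = [(12.0, 0.25, 40.0), (8.0, 0.5, 40.0), (20.0, 0.25, 40.0)]

# independent sub-catchments can run concurrently before a routing step
# combines their outflows at the watershed outlet
with ThreadPoolExecutor() as pool:
    runoff = list(pool.map(simulate_subcatchment, subcatchments))

total = sum(runoff)
print(runoff, total)
```

The same map-then-combine structure is what makes automatic calibration cheap: each calibration trial is itself an independent model run that can be farmed out in parallel.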
Industry Partner(s): Greenland International Consulting
PI & Academic Institution: Prasad Daggupati, University of Guelph
# of HQPs: 1
Platform: Cloud
Focus Areas/Industry Sector: Cities, Digital Media, Water
Technology: Modelling and Simulation

Advancing video categorization
Vubble is a media tech company that builds solutions for trustworthy digital video distribution and curation. Using a combination of algorithms and human curators, Vubble searches the internet to locate video content of interest to its users. Vubble is collaborating with Dr. Vida Movahedi from Seneca's School of Information and Communication Technology to develop a machine-learning algorithm that will automatically output highly probable categories for videos. With this algorithm implemented in the Vubble workflow to assist in automated video identification, Vubble will be able to better address its existing and emerging customer demands while increasing its productivity and competitiveness. This video identification research project will be Vubble's first step in understanding how to automate the accurate identification of video. The need to automate video curation is pressing, as video is quickly becoming the world's dominant form of media consumption, particularly for digitally native younger audiences. Furthermore, the results of the applied research will aid Vubble in addressing what they believe is a looming problem facing all media consumers and society: the rise of fake news video created from archival footage.
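The final stage of such a categorizer, turning raw model scores into "highly probable categories", might look like the sketch below. The category labels and scores are hypothetical, not Vubble's taxonomy.

```python
import math

def top_categories(scores, k=2):
    """Convert raw classifier scores into (category, probability) pairs via softmax."""
    m = max(scores.values())                      # subtract max for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    ranked = sorted(((c, e / z) for c, e in exps.items()), key=lambda kv: -kv[1])
    return ranked[:k]

# hypothetical classifier outputs for one video
scores = {"news": 3.1, "sports": 0.4, "science": 2.2, "music": -1.0}
print(top_categories(scores))
```

In a human-in-the-loop workflow like Vubble's, videos whose top probability falls below a confidence threshold would be routed to curators rather than auto-tagged.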
Industry Partner(s): Vubble
PI & Academic Institution: Vida Movahedi, Seneca College
# of HQPs: 3
Platform: Cloud
Focus Areas/Industry Sector: Digital Media
Technology: Artificial Intelligence

Agile real time radio signal processing
Canadian VLBI capability has been missing for a decade. Jointly with Thoth Technology Inc., we propose to restore domestic and international VLBI infrastructure that will be commercialized by Thoth Technology Inc. This project will implement and optimize multi-telescope correlation and analysis software on the SOSCIP BGQ, Agile, and LMS platforms. The resulting pipeline package will allow commercial turnkey VLBI delivery by Thoth Technology Inc. to domestic and international customers, in a market of about $10 million per year.
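At the heart of VLBI correlation is finding the relative delay between signals recorded at different telescopes. A brute-force sketch on toy data is below; real correlators work in the frequency domain on streamed, channelized voltages at vastly larger scale.

```python
def best_lag(x, y, max_lag):
    """Brute-force cross-correlation: return the lag where x and y align best."""
    best, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        v = sum(x[i] * y[i + lag] for i in range(len(x)) if 0 <= i + lag < len(y))
        if v > best_val:
            best, best_val = lag, v
    return best

sig = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0]
delayed = [0.0, 0.0, 1.0, 0.0, -1.0, 0.5]   # same signal shifted by one sample
print(best_lag(sig, delayed, 2))
```

The recovered lag corresponds to the geometric delay between stations, which is what multi-telescope correlation must track continuously as the Earth rotates.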
Industry Partner(s): Thoth Technology
PI & Academic Institution: Ue-Li Pen, University of Toronto
# of HQPs: 5
Focus Areas/Industry Sector: Digital Media
Technology: Modelling and Simulation, Real-Time Analytics


An economics-aware autonomic management system for big data applications
Recent advancements in software technology, including virtualization, microservices, and cloud computing, have created novel challenges and opportunities in developing and delivering software. They have also given rise to DevOps, a hybrid team responsible for both developing and managing the software system, and have led to tools that take advantage of the enhanced flexibility and enable automation of the software management cycle. In this new world characterized by volatility and speed, the Business Operations (BizOps) team lags behind and remains disconnected from the DevOps team. BizOps views software as a product and is responsible for defining the business and economic strategy around it.
The goal of the proposed project is to imbue DevOps tools and processes with BizOps knowledge and metrics through formal models and methods. Currently, BizOps receives the software system or service as a finished product, a black box, on which a price has to be set before it is offered to clients. The price and the marketing strategy are usually defined at the beginning of a sales cycle (e.g. a year) and remain the same for the entirety of the cycle. However, this is in contrast to the great volatility of the service itself. In most cases, the strategies are based on the instincts of managers with high acumen and experience, and on broad marketing surveys or one-to-one negotiations with clients, information that can easily change and may remain disconnected from the software development. The end product of this project is a set of economic and performance models that connect the DevOps and BizOps processes during the software's life cycle and can eventually be incorporated into automated tools to adapt and scale the system in production and enable continuous development, integration, and delivery.
Industry Partner(s): IBM Canada
PI & Academic Institution: Marin Litoiu, York University
# of HQPs: 5
Platform: Cloud
Focus Areas/Industry Sector: Cities, Digital Media
Technology: Artificial Intelligence, Real-Time Analytics, Sensors


Industry Partner(s): Osisko Mining Corporation
PI & Academic Institution: Neil Banerjee, Western University
Co-PI Names: Leonardo Feltrin
# of HQPs: 1
Platform: Cloud
Focus Areas/Industry Sector: Digital Media, Mining
Technology: Sensors



Big data analysis and optimization of rural and community broadband wireless networks
Rural broadband initiatives are sweeping across the world. Canada, being a vast and diverse country, has a specific Internet reachability problem due to its sparse population. It is not economically viable to bring fiber to each and every house in Canada, nor to connect every household through satellites. Broadband Internet over wireless networks is a good option: the Internet is brought over fiber to a point of presence and then carried to houses over wireless links.
EION is actively working in Ontario and Newfoundland to make rural broadband a possibility. Wireless networking in rural areas of Canada is a challenge in itself due to weather, terrain, and accessibility. Real-time factors such as weather, water, and foliage alter the maximum capacity of the wireless pipe. In addition, the usage patterns of the houses, especially real-time video that requires fast response times, demand adequate planning.
This is becoming very critical, as almost 80% of the traffic appears to be video related, owing to the popularity of applications such as Netflix, YouTube, and Shomi. Intelligence in wireless rural broadband networks is a necessity to bring good quality voice, video, and data reliably. Optimization at the system and network levels, using heuristics and artificial intelligence techniques based on big data analysis of video packets, is paramount to enable smooth-performing rural broadband networks.
In this project, we will analyze the big data of video packets in rural broadband networks in Ontario and Newfoundland and design optimized network architectures to bring reliable video services over constrained rural broadband wireless networks.
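One simple heuristic of the kind such optimization would refine is to prioritize real-time video flows within whatever capacity the link currently offers (which, as noted above, shifts with weather and foliage). The flow names, demands, and greedy policy below are illustrative only.

```python
def allocate(capacity_mbps, flows):
    """Greedy allocation: serve real-time video flows first, then others, within capacity."""
    order = sorted(flows, key=lambda f: 0 if f["type"] == "video" else 1)
    alloc, remaining = {}, capacity_mbps
    for f in order:
        give = min(f["demand"], remaining)   # grant up to demand or what is left
        alloc[f["id"]] = give
        remaining -= give
    return alloc

# hypothetical flows on one constrained rural link
flows = [
    {"id": "web", "type": "data", "demand": 5},
    {"id": "netflix", "type": "video", "demand": 8},
    {"id": "voip", "type": "video", "demand": 1},
]
print(allocate(10, flows))
```

A learned policy would replace the fixed video-first ordering with priorities inferred from packet-level measurements, and re-run the allocation as the link's effective capacity changes.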
Industry Partner(s): EION Inc.
PI & Academic Institution: Amiya Nayak, University of Ottawa
Co-PI Names: Octavia Dobre
# of HQPs: 3
Platform: Cloud
Focus Areas/Industry Sector: Cities, Digital Media, Energy
Technology: Artificial Intelligence, Real-Time Analytics




Computational support for big data analytics, information extraction and visualization
The Centre for Innovation in Visualization and Data Driven Design (CIVDDD), an Ontario ORF-RE project, performs research for which SOSCIP resources are needed; it was awarded NSERC CRD funding with IBM Platform [Applications of IBM Platform Computing solutions for solving Data Analytics and 3D Scalable Video Cloud Transcoder Problems] beginning in July 2015. This project involves big data, visualization, and transcoding, and will train many HQP. We require access to equipment capable of running a multi-core cluster using IBM Symphony and BigInsights software with IBM Platform for data analytics, visualization, and transcoding. Our objectives include:
IBM Platform:
- Test the applicability of Platform Symphony to Data Analytics problems to produce demonstrations of Symphony on application domains (we started by exploring streaming traffic analysis datasets) and identify improvements to Symphony to gain IBM advantage in the marketplace.
- Design and implement methods to greatly speed-up the search for high utility frequent itemsets in big data using Symphony in a parallel distributed environment.
- Design algorithms to determine which are suitable in such an environment.
- Identify commercialization venues in application domains.
- Explore a scalable video cloud transcoder for wireless multicasts.
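The frequent-itemset objective above can be illustrated in miniature, without Symphony, by counting itemsets per data partition and merging the partial counts, the same map-reduce pattern a Symphony grid would distribute across workers.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, k, min_support):
    """Count k-item combinations per partition, merge, and filter by support."""
    # map: each partition counts locally; here the partitions are simply halves
    mid = len(transactions) // 2
    partial = []
    for part in (transactions[:mid], transactions[mid:]):
        c = Counter()
        for t in part:
            c.update(combinations(sorted(t), k))   # canonical order for each itemset
        partial.append(c)
    # reduce: merge partial counts, then keep itemsets meeting minimum support
    total = partial[0] + partial[1]
    return {iset: n for iset, n in total.items() if n >= min_support}

tx = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b", "d"}]
print(frequent_itemsets(tx, 2, min_support=3))
```

High-utility mining, as in the objective, would additionally weight each itemset by per-item utilities rather than raw frequency, but the partition-count-merge skeleton is the same.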
Industry Partner(s): IBM Spectrum Computing
PI & Academic Institution: Aijun An, York University
Co-PI Names: Amir Asif
# of HQPs: 4
Platform: Cloud
Focus Areas/Industry Sector: Cities, Digital Media, Energy, Water
Technology: Artificial Intelligence, Image/Video Processing



Detailed computational fluid dynamics modeling of UV-AOPs photoreactors for micropollutants oxidation in water and wastewater
Micropollutants such as bisphenol-A and N-nitrosodimethylamine pose a significant threat to aquatic life, animals, and human beings due to their persistent and potentially carcinogenic nature. While most conventional water treatment methods cannot remove these contaminants, ultraviolet-driven (UV) advanced oxidation processes (AOPs) are effective in degrading micropollutants. As UV-AOPs require electrical energy to enable the treatment, energy costs present a barrier to the widespread adoption of this technology. In this project, we focus on the optimization of UV-AOP-based reactors to enhance their degradation performance while reducing their energy consumption. To this end, we will develop a detailed numerical model that integrates hydraulics, optics, and chemistry to investigate UV-AOP photoreactors in a comprehensive manner.
The resulting information will then be utilized to design the next generation of UV-AOP photoreactors commercialized by Trojan Technologies. The design space will be explored by high-performance computer simulations of full-scale photoreactors rather than simplified or scaled-down models. This will be accomplished by leveraging open-source software, artificial-intelligence optimization techniques, and the second-to-none parallel-computing capabilities offered by Blue Gene/Q. Once the optimization of UV-AOP-based reactors is complete, the advanced modeling results generated using Blue Gene/Q will be utilized in the development of a simplified model for sizing purposes. This will be accomplished through the combined use of metamodeling techniques and cloud computing. In brief, the concept is to simplify the detailed model developed earlier so that it can be simulated on hand-held mobile devices, which will allow the company's sales personnel to market the optimized reactors. Consequently, it will allow the company to increase its competitiveness on a global scale as well as to increase the rate of adoption of advanced water treatment technologies by water utilities and end-users.
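In its simplest textbook form, the chemistry component reduces to first-order decay of contaminant concentration with delivered UV fluence; the full model couples this with hydraulics and optics to compute the fluence each fluid parcel actually receives. The rate constant and fluence below are illustrative values, not Trojan design parameters.

```python
import math

def remaining_concentration(c0, k, fluence):
    """First-order photolysis/oxidation: C = C0 * exp(-k * fluence)."""
    # c0: initial concentration (e.g. ug/L), k: fluence-based rate constant
    # (cm^2/mJ), fluence: delivered UV dose (mJ/cm^2); all values illustrative
    return c0 * math.exp(-k * fluence)

# a parcel receiving 400 mJ/cm^2 at k = 0.005 cm^2/mJ retains exp(-2) of its load
print(remaining_concentration(100.0, 0.005, 400.0))
```

The CFD model's job is essentially to compute the distribution of `fluence` over all flow paths through the reactor, since poorly irradiated paths dominate the surviving contaminant load.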
Industry Partner(s): Trojan Technologies
PI & Academic Institution: Anthony G. Straatman, Western University
Focus Areas/Industry Sector: Advanced Manufacturing, Digital Media, Water
Technology: Computational Fluid Dynamics


Developing real-time hyper-resolution simulation capability for the HydroGeoSphere (HGS) integrated groundwater – surface water modelling platform
Climate change will greatly impact the availability and quality of Earth’s water resources over the next century. The expected increase in mean temperature will have a severe impact on the water cycle, not only through changing precipitation patterns and amounts, but also through an increase in the severity and frequency of extreme events. Already, changing rainfall patterns and shifting temperatures are increasing the complexity of water management in the Grand River watershed and affecting the programs and operations of the Grand River Conservation Authority (GRCA).
Rigorous science-based forecasts to address how the surface and subsurface (groundwater) resources might be impacted by climate change will therefore necessarily demand the use of a computational platform that fully integrates the climate system with the surface/subsurface hydrological system in three dimensions. By establishing proactive science-based management policies now, such as water use quotas, limits on fertilizer and pesticide use, water treatment guidelines, flood control practices, etc., the future sustainability of the water resources can be protected, and perhaps even enhanced. The high-resolution regional climate simulations being performed by Prof. W.R. Peltier’s group will provide the data to drive our 3D integrated surface/subsurface hydrological model. We are also coordinated with the smart data collection activities being undertaken in the Southern Ontario Water Consortium (SOWC), of which IBM is a major partner.
Industry Partner(s): Aquanty Inc.
PI & Academic Institution: Ed Sudicky, University of Waterloo
Co-PI Names: David Lapen
# of HQPs: 2
Platform: Cloud
Focus Areas/Industry Sector: Digital Media, Water
Technology: Modelling and Simulation


Development of cardiac specific machine learning infrastructure
Analytics for Life, Inc. (A4L) is an early-stage medical device company that specializes in the development of technologies to analyze patient physiological signals in order to evaluate cardiac performance, status, and risk. A4L's core competencies include identifying and developing mathematical features from physiological signals and assembling these features into clinically informative formulae using machine learning techniques. A4L has used third-party machine learning tools (open source and licensed products) for the formula-generation aspect of the product development cycle.
Specifically, A4L has used these tools to demonstrate the feasibility of computing left ventricular ejection fraction, cardiac ischemic burden, and other cardiac performance/status parameters from simple-to-collect, non-invasive physiological signals (surface voltage gradients, SpO2, impedance, etc.). As a result of this experience, A4L has learned the benefits and insufficiencies of these tools for its specific purposes. A4L plans to file with the U.S. Food and Drug Administration (FDA) an application for approval of a physiological signal collection device and will soon afterwards seek market clearance for products assessing cardiac health emanating from the machine learning process. A4L believes it can build a machine learning tool specifically tailored for cardiac evaluation based on experience with the tools used to date.
This A4L-specific machine learning paradigm will search only relevant mathematical spaces, cutting down on time and CPU power needed to iterate to solutions and will allow for an assessment of a much wider array of potential solutions. Furthermore, this A4L-specific machine learning paradigm will provide a controlled and validated system that can be audited and evaluated by regulatory bodies, something that is not possible with the current machine learning tool(s). A4L proposes a hybridization of paradigms within a set mathematical space. This will create efficiency in the search, and therefore more searches can be performed in the same period of time. This will lead to more solutions being available for evaluation, resulting in more accurate and efficiently produced end solutions. If successful, this new paradigm will allow for simple, non-invasive, rapid and relatively inexpensive cardiac diagnostic capabilities, bringing tertiary care diagnostics to primary care settings and disrupting the current infrastructure and capital cost-centric model of diagnostic delivery.
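The restricted-search idea, scoring only feature combinations drawn from a clinically relevant candidate set rather than the full mathematical space, can be sketched as below. The feature names, toy scorer, and data are hypothetical, not A4L's actual features or methods.

```python
from itertools import combinations

def best_feature_pair(features, labels, candidates):
    """Exhaustively score pairs from a restricted candidate set; return the best."""
    def accuracy(pair):
        a, b = pair
        # toy formula: predict positive when the two features sum above zero
        preds = [int(features[a][i] + features[b][i] > 0) for i in range(len(labels))]
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return max(combinations(candidates, 2), key=accuracy)

# hypothetical signal-derived features and outcome labels
features = {
    "hr_var":  [0.5, -0.2, 0.8, -0.6],
    "spo2_d":  [0.1, -0.4, 0.2, -0.1],
    "noise":   [0.3, 0.9, -0.5, 0.2],
}
labels = [1, 0, 1, 0]
print(best_feature_pair(features, labels, ["hr_var", "spo2_d", "noise"]))
```

Because the candidate set and the scoring rule are both explicit, every solution the search can ever produce is enumerable and reproducible, which is the property that makes such a system auditable by a regulator.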
Industry Partner(s): IBM Canada Ltd., Analytics 4 Life
# of HQPs: 6
Focus Areas/Industry Sector: Digital Media, Health
Technology: Artificial Intelligence

Distributed and scalable search in enterprise databases
Google search, and other search engines such as Bing and Yahoo!, provide a convenient way to find Webpages that contain various keywords or are related to particular topics. For the purposes of searching, Webpages are essentially loosely structured paragraphs of text. However, much of the world’s high-quality enterprise data are structured into well defined tables containing sets of well-defined columns.
One consequence of structured database design is that information about a single entity may be scattered across many columns in many tables, and must be stitched together in a meaningful way when answering user queries. This turns out to be significantly more difficult than finding Webpages or text documents containing various keywords.
As Dr. Surajit Chaudhuri (a Distinguished Scientist at Microsoft Research) recently argued in a keynote talk at the IEEE Data Engineering conference, search over structured databases has fallen behind search over unstructured data. In the proposed research, we will develop a powerful and intuitive search system, akin to Web keyword search, for structured enterprise data. Our system will empower non-technical users to explore enterprise databases and turn big data into actionable insight, just as Google search has empowered society to explore the Web.
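A toy version of the stitching problem, joining rows from different tables so that together they cover all query keywords, might look like the sketch below; the table names, columns, and foreign keys are invented for illustration.

```python
def keyword_search(tables, fks, keywords):
    """Return joined row pairs whose combined text covers all keywords."""
    kws = {k.lower() for k in keywords}
    hits = []
    for (t1, col1, t2, col2) in fks:           # follow declared foreign keys
        for r1 in tables[t1]:
            for r2 in tables[t2]:
                if r1[col1] != r2[col2]:
                    continue                    # rows must actually join
                text = " ".join(str(v) for v in (*r1.values(), *r2.values())).lower()
                if all(k in text for k in kws):
                    hits.append((r1, r2))
    return hits

tables = {
    "customers": [{"id": 1, "name": "Acme Corp"}, {"id": 2, "name": "Globex"}],
    "orders": [{"cust_id": 1, "item": "laptop"}, {"cust_id": 2, "item": "server"}],
}
fks = [("customers", "id", "orders", "cust_id")]
print(keyword_search(tables, fks, ["acme", "laptop"]))
```

The research challenge the abstract describes is doing this at scale: real systems cannot enumerate row pairs and instead rank minimal join trees over keyword-matching tuples, using indexes and the schema graph.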
Industry Partner(s): IBM Canada Ltd.
PI & Academic Institution: Lukasz Golab, University of Waterloo
Co-PI Names: Mehdi Kargar
Focus Areas/Industry Sector: Digital Media
Technology: Artificial Intelligence, Internet of Things, Real-Time Analytics

Distributed Deep Learning and Graph Analytics Using IBM Spectrum Computing Solutions
Deep learning is a popular machine learning technique that has been applied to many real-world problems, ranging from computer vision to natural language processing, and in most cases it has outperformed previous work. However, training a deep neural network is very time-consuming, especially on big data. A popular solution is to distribute and parallelize the training process across multiple machines. Indeed, the race is on to parallelize deep learning: industry and academic research teams around the world are trying to make deep neural networks train as fast as possible on farms of GPU-capable servers. We are working with our IBM partners to help develop advanced scheduling and messaging techniques for distributed deep learning. In addition, we will develop two real-world applications to demonstrate its efficiency and effectiveness. In the first application, we address the video surveillance problem of tracking a moving target over a network of video cameras with partial or no overlap in their coverage. We will use a deep learning approach to identify multiple pedestrians in each video frame, and a particle filter to track moving pedestrians. In the second application, we address the problem of fraud/intrusion detection using graph-based detection that considers relationships between objects or individuals. Graph-based approaches are powerful because they do not operate on objects or individuals in isolation, but also consider their network information. We will emphasize graph-based fraud detection methods, which have a number of applications and potentially large impact.
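The core of data-parallel training is an all-reduce over worker gradients followed by a single shared update. Stripped of any framework, one step looks like the sketch below; the weights and gradients are illustrative numbers, and the scheduling/messaging research above concerns doing this reduction efficiently at scale.

```python
def sgd_step_data_parallel(weights, worker_grads, lr=0.1):
    """Average gradients from all workers (all-reduce), then apply one SGD update."""
    n = len(worker_grads)
    # reduce: element-wise mean of the per-worker gradients
    avg = [sum(g[i] for g in worker_grads) / n for i in range(len(weights))]
    # update: every worker applies the same step, keeping replicas in sync
    return [w - lr * a for w, a in zip(weights, avg)]

weights = [1.0, -2.0]
worker_grads = [[0.2, 0.4], [0.6, 0.0]]   # gradients from two data shards
print(sgd_step_data_parallel(weights, worker_grads))
```

Because every worker ends the step with identical weights, the communication pattern (who sends which gradient slices to whom, and when) is the entire performance story, which is why scheduling and messaging are the project's focus.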
Industry Partner(s): IBM Canada Ltd.
PI & Academic Institution: Aijun An, York University
Co-PI Names: Amir Asif
# of HQPs: 3
Focus Areas/Industry Sector: Digital Media
Technology: Artificial Intelligence, Image/Video Processing


Efficient deep learning for real-time traffic event detection
Miovision is interested in designing the first affordable, low-power, energy-efficient real-time traffic event detection system that can be installed without being powered by the grid or connected directly to city-installed infrastructure. Deep learning for traffic event detection can provide overwhelmingly superior accuracy and addresses most of the real-world scenarios that make competing detectors unsuitable for customer adoption. The challenge with deep learning is its complexity, which is currently infeasible for a self-powered, real-world embedded detection system. Working with Dr. Alexander Wong and the Vision and Image Processing Lab at the University of Waterloo, the goal of this project is to develop technologies that can significantly reduce the complexity of deep learning for traffic event detection, while maintaining its accuracy and market fit, so that it can be deployed on a low-cost, low-powered hardware platform.
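One standard complexity-reduction technique of the kind such a project could build on is magnitude pruning: zeroing the smallest weights so the deployed network needs less compute and memory. The sketch below is generic, not Miovision's actual method, and the weights are illustrative.

```python
def prune_weights(weights, keep_fraction=0.5):
    """Magnitude pruning: keep the largest-magnitude weights, zero the rest."""
    k = int(len(weights) * keep_fraction)
    # threshold at the k-th largest magnitude
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.9, -0.05, 0.3, 0.01, -0.7, 0.2]
print(prune_weights(w, 0.5))
```

After pruning, a short fine-tuning pass typically recovers most of the lost accuracy, and the resulting sparse network can be executed with far fewer multiply-accumulates on embedded hardware.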
Industry Partner(s): Miovision
PI & Academic Institution: Alex Wong, University of Waterloo
# of HQPs: 3
Focus Areas/Industry Sector: Cities, Digital Media
Technology: Artificial Intelligence, Image/Video Processing


Generalized heterogeneous radio signal processing
A new generation of radio telescopes is opening new windows on the Universe, allowing astronomers to observe the cosmos in unprecedented ways. Powered by the ongoing revolution in computing, these new telescopes operate at the cutting edge of digital technologies. New algorithms are being developed at a spectacular pace, and we are forging a new partnership with the Markham branch of Advanced Micro Devices (AMD) to add to these, developing new tools and opening new possibilities in radio astronomy, software-defined radio, and similar telecommunications technologies. High-cadence mitigation of Radio Frequency Interference (RFI), advanced digital beamforming techniques, and dynamic spectral reshaping will all be developed and ported to an open software framework, allowing them to be used on a wide variety of computational and signal-processing hardware.
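A minimal sketch of high-cadence RFI mitigation is robust thresholding of per-channel powers against the median. The spectrum below is synthetic; production flaggers operate on streaming spectrogram blocks and use far more sophisticated statistics.

```python
def flag_rfi(spectrum, n_sigma=3.0):
    """Flag channels whose power deviates from the median by > n_sigma * robust sigma."""
    s = sorted(spectrum)
    med = s[len(s) // 2]
    # median absolute deviation: robust to the very outliers we want to catch
    mad = sorted(abs(x - med) for x in spectrum)[len(spectrum) // 2]
    cutoff = n_sigma * 1.4826 * mad   # 1.4826 scales MAD to sigma for Gaussian noise
    return [abs(x - med) > cutoff for x in spectrum]

# synthetic band: channel 4 carries a strong interferer
spectrum = [1.0, 1.1, 0.9, 1.05, 9.0, 0.95, 1.0, 1.02]
print(flag_rfi(spectrum))
```

Using median and MAD rather than mean and standard deviation matters here: a single strong interferer would inflate the mean-based threshold enough to mask itself.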
Industry Partner(s): Advanced Micro Devices Inc.
PI & Academic Institution: Keith Vanderlinde, University of Toronto
Platform: Agile
Focus Areas/Industry Sector: Aerospace & Defence, Digital Media
Technology: Artificial Intelligence, Modelling and Simulation
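High-cadence RFI mitigation often reduces to flagging outlier samples in a band-power spectrum. As a minimal sketch of one standard robust approach, median absolute deviation (MAD) thresholding; the synthetic band, channel indices, and 5-sigma threshold are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def flag_rfi(spectrum: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    """Flag channels whose power deviates from the band median by more than
    `threshold` robust standard deviations (estimated via the MAD)."""
    med = np.median(spectrum)
    mad = np.median(np.abs(spectrum - med))
    robust_sigma = 1.4826 * mad  # MAD -> sigma for Gaussian noise
    return np.abs(spectrum - med) > threshold * robust_sigma

# Synthetic 1024-channel band: Gaussian noise plus two narrowband interferers.
rng = np.random.default_rng(0)
power = rng.normal(1.0, 0.05, 1024)
power[[100, 700]] += 3.0  # inject strong RFI spikes
mask = flag_rfi(power)
print(mask.nonzero()[0])  # channels flagged for excision
```

MAD-based statistics are preferred here because the median is insensitive to the very outliers being flagged, unlike a plain mean/standard-deviation cut.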



High fidelity simulations and low-order aero-acoustic modeling of engine test cells
The testing and certification of gas turbines demand the well-controlled environment provided by an engine test cell. Resonant acoustic coupling between the flow-generated noise from the gas turbine exhaust and the engine test cell affects the quality and reliability of engine testing and certification. Predictive modeling of flow-generated noise using high-fidelity numerical simulations is central to an a priori acoustic assessment and to the development of noise-mitigating designs. As part of this effort, the Multi-Physics Interaction Lab at the University of Waterloo will numerically study the acoustic noise generation in partially confined jets undergoing re-acceleration through the test cell ejector system, using high-fidelity large-eddy simulations run on the SOSCIP high-performance computers. As a direct outcome, the researchers will develop a low-order aero-acoustic model that the industrial partner will use to predict the frequency and amplitude of the resonant coupling phenomena to within +/- 20%. This OCE- and NSERC-funded project will permit the training and mentoring of four HQP for careers in science and technology within Canada.
Industry Partner(s): MDS Aero Support Corp.
PI & Academic Institution: Jean Pierre Hickey, University of Waterloo
# of HQPs: 3
Focus Areas/Industry Sector: Advanced Manufacturing, Digital Media, Energy
Technology: Computational Fluid Dynamics
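To make the +/- 20% target concrete: a low-order model of this kind predicts resonant frequencies that are then compared against measurement. As a purely illustrative back-of-the-envelope example (not the project's model), the longitudinal modes of a closed-open duct follow the quarter-wave relation f_n = (2n-1)c/4L; the 20 m duct length and comparison frequency below are assumptions:

```python
def duct_modes(length_m: float, c: float = 343.0, n_modes: int = 3):
    """Quarter-wave (closed-open duct) longitudinal resonance frequencies in Hz."""
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_modes + 1)]

def within_tolerance(predicted_hz: float, measured_hz: float, tol: float = 0.20) -> bool:
    """The project's stated target: prediction within +/-20% of measurement."""
    return abs(predicted_hz - measured_hz) <= tol * measured_hz

# A hypothetical 20 m test-cell ejector duct at standard conditions (c = 343 m/s).
modes = duct_modes(20.0)
print(modes)  # [4.2875, 12.8625, 21.4375] Hz
print(within_tolerance(modes[0], 4.0))  # True: inside the +/-20% band
```

The actual low-order model would of course account for the jet re-acceleration and partial confinement, which shift these idealized duct frequencies.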



HPC cloud analytics / machine learning support for Watson Pepper clinical study
Skin cancer is the most common type of cancer: roughly 80,000 cases of skin cancer are diagnosed in Canada every year, about 5,000 of which are melanoma, the deadliest form of skin cancer (Canadian Skin Cancer Foundation). Current prevention efforts focus on educating individuals on preventative actions they can take to reduce their risk. However, research has shown that both communication failure and information overload are significant problems affecting the quality of patient-centred care. Social robotics and artificial intelligence have been used effectively to communicate and to positively influence behaviour, so this research proposes to develop and test these combined technologies as an intervention for skin cancer prevention education.
The research team and collaborating research partner IBM will integrate IBM Watson cognitive computing applications with SoftBank Robotics' advanced robotics platform, the Pepper robot. The Watson Pepper prototype will serve as the intervention in a randomized controlled clinical trial (N = 200) assessing the efficacy of socially assistive robotics for behavioural change in skin cancer prevention knowledge and practices among medical patients, the first clinically tested implementation of a Watson Pepper robot for healthcare communication. The research proposes commercialization and business implementation of the integrated IBM Watson robot across an expanded scale and scope of healthcare communication applications. To support this innovative technology milestone, SOSCIP will provide the cloud data analytics and memory capacity critical to the analysis and modeling of the large multivariate data sets associated with this project.
Industry Partner(s): IBM Canada Ltd.
PI & Academic Institution: David Harris Smith, McMaster University
Co-PI Names: Hermenio Lima, Frauke Zeller
# of HQPs: 3
Platform: Cloud, IBM Watson, LMS
Focus Areas/Industry Sector: Cybersecurity, Digital Media, Health
Technology: Artificial Intelligence, Robotics
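A trial size of N = 200 (100 per arm) can be sanity-checked with a standard two-proportion power calculation. A minimal sketch using the normal approximation; the assumed 40% baseline and 60% post-intervention knowledge rates are illustrative choices, not figures from the study protocol:

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_proportion_power(p1: float, p2: float, n_per_arm: int) -> float:
    """Approximate power of a two-sided alpha = 0.05 test comparing two
    independent proportions with equal group sizes (normal approximation)."""
    z_alpha = 1.959964  # critical z for alpha = 0.05, two-sided
    pbar = (p1 + p2) / 2.0
    se0 = math.sqrt(2.0 * pbar * (1.0 - pbar) / n_per_arm)  # SE under H0
    se1 = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    z = (abs(p2 - p1) - z_alpha * se0) / se1
    return norm_cdf(z)

# 100 participants per arm, detecting a 40% -> 60% improvement.
power = two_proportion_power(0.40, 0.60, 100)
print(round(power, 2))
```

Under these assumed effect sizes the design lands near the conventional 80% power target, which is consistent with an N = 200 two-arm trial.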