IT CoEs

The Department upholds a strong research culture in parallel with the overall teaching-learning process. The outcome is reflected in the papers published by students and faculty in national and international conferences and journals. Some of the research areas of focus are:

  • Virtusa CoE on AWS
  • EPAM CoE on Java
  • Internet of Things (IoT)
  • Big Data


The term ETL, which stands for extract, transform, and load, refers to a three-stage process in database usage and data warehousing. It enables integration and analysis of data stored in different databases and heterogeneous formats. After the data is collected from multiple sources (extraction), it is reformatted and cleansed for operational needs (transformation). Finally, it is loaded into a target database, data warehouse, or data mart to be analyzed. In most data integration or data warehousing projects, the time spent enforcing business data-domain rules and business data-integrity rules can be as high as 80 percent of the total effort, and enforcement of such rules normally happens during the transformation stage.
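The three stages above can be sketched as a minimal pipeline. This is an illustrative sketch only, not a production ETL tool: the `sales.csv` layout, the `customer`/`amount` fields, and the SQLite target table are all hypothetical, chosen to show where a business data-domain rule (amount must be numeric) is enforced during transformation.

```python
import csv
import sqlite3

# Extract: read raw records from a source CSV export (hypothetical layout
# with "customer" and "amount" columns).
def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Transform: cleanse and reformat the records for operational needs.
def transform(rows):
    cleaned = []
    for row in rows:
        # Enforce a simple business data-domain rule: amount must be numeric.
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # reject records that violate the rule
        cleaned.append((row["customer"].strip().title(), round(amount, 2)))
    return cleaned

# Load: write the conformed records into the target table (SQLite stands in
# for the warehouse or data mart here).
def load(records, db_path):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", records)
    con.commit()
    con.close()
```

In a real project the transformation stage would carry many more such rules, which is why it dominates the project timeline as noted above.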

Informatica offers several products focused on data integration, including tools for ETL, data masking, data quality, data replication, data virtualization, and master data management. Its PowerCenter ETL/data-integration tool is the most widely used of these, to the point that "Informatica" has become synonymous with Informatica PowerCenter.

CoE – Big Data

This course aims to train students in Big Data with Hadoop. Big Data and Hadoop adoption is growing across the world, and this strong growth translates into great opportunity for IT professionals. The course prepares students to build a successful career in Big Data and Hadoop by applying practical skills and analytical knowledge to real-world problems. Big Data refers to huge volumes of data: collections of large datasets that cannot be processed using traditional computing techniques. Hadoop is an open-source framework that allows storing and processing big data in a distributed environment across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
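The "simple programming models" mentioned above refers to MapReduce, where computation is expressed as a map phase and a reduce phase. As a rough sketch (an in-memory simulation, not the actual Hadoop runtime, which would distribute these phases across a cluster), here is the classic word-count example:

```python
from collections import defaultdict
from itertools import chain

# Map phase: emit (word, 1) pairs for each input line, as a Hadoop
# mapper would.
def mapper(line):
    for word in line.lower().split():
        yield word, 1

# Shuffle: group the intermediate pairs by key, as the framework does
# between the map and reduce phases.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts emitted for each word.
def reducer(key, values):
    return key, sum(values)

def word_count(lines):
    pairs = chain.from_iterable(mapper(line) for line in lines)
    return dict(reducer(k, v) for k, v in shuffle(pairs).items())
```

The appeal of the model is that the mapper and reducer contain only the problem logic; distribution, fault tolerance, and data movement across thousands of machines are handled by the framework.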