Research Grants
Main Research Projects
Prototype of a HUB Floating-Point Unit in a RISC-V Platform
Funding Agency; Identifier: National Government (MICINN, AEI); PDC2023-145800-I00
Period: January 1, 2024 to December 31, 2025
Principal Investigators: Oscar Plata and Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 257,367 €
Several research and industrial initiatives across Europe have been exploring the potential of the RISC-V architecture in applications ranging from embedded systems to high-performance computing. However, although many RISC-V cores are available, most do not implement a floating-point (FP) arithmetic unit. When FP support is required, the common choice is FPnew, essentially the only core available for this purpose. FPnew is a parametric FP unit supporting the standard RISC-V formats and operations; it is highly configurable, but also complex to set up and potentially costly to implement.
The availability of FP units for the RISC-V architecture is thus extremely limited. The sole option currently available implements the IEEE 754 standard, making it relatively costly. Nevertheless, it remains the choice for low-power embedded systems that require FP capabilities, simply because it is the only viable solution. This situation is largely due to the difficulty and expense of designing an FP unit from scratch, particularly without prior experience in this domain. Our research aims to fill this technological gap.
Previously, we have proposed the HUB (Half-Unit Biased) format that stands out for its ability to simplify arithmetic units at the logic level, offering profound advantages in hardware implementations. The HUB approach concurrently reduces area resources, power consumption, and computation time. Additionally, it has been empirically demonstrated that HUB formats are exceptionally well-suited for general FP applications and application-specific fixed-point data paths employing real numbers.
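For intuition, here is a minimal sketch of the HUB property in a toy fixed-point setting (an illustration we assume for exposition, not the project's hardware design): appending an implicit '1' as the least significant bit makes truncation behave like round-to-nearest, so no rounding adder or carry propagation is needed.

    # Toy illustration of the HUB rounding property (assumed fixed-point
    # setting; not the project's RTL).
    def hub_quantize(x: float, n: int) -> float:
        """Keep n fractional bits by truncation, then add the implicit
        half-ULP '1' bit that every HUB number carries."""
        ulp = 2.0 ** -n
        return int(x / ulp) * ulp + ulp / 2   # floor + implicit LSB

    # For any x >= 0, the error is at most ulp/2, the same bound as
    # conventional round-to-nearest, but obtained with no rounding logic.
    n = 8
    worst = max(abs(hub_quantize(k / 997.0, n) - k / 997.0)
                for k in range(997))
    print(f"worst error = {worst:.6f}, bound = {2.0 ** -(n + 1):.6f}")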
This project will provide a more accessible and efficient solution for integrating FP units into the RISC-V architecture using the HUB format. The advantages of our proposal are multifaceted and firmly rooted in both technical and strategic considerations. First and foremost, adopting the HUB numerical representation format offers a compelling advantage: it significantly reduces the processor's area and power consumption while maintaining the precision of the calculations. This optimization addresses a critical concern in modern computing, where energy efficiency and compact design are paramount, particularly in portable devices and embedded systems. By reducing resource utilization without sacrificing computational accuracy, the project aligns with current industry trends and European initiatives aimed at sustainable and efficient computing solutions.
MALAGA MICROELECTRONICS
Funding Agency; Identifier: National Government (PRTR, NextGenerationEU); TSI-069100-2023-0013
Period: July 1, 2023 to June 30, 2026 (to June 30, 2027 with company funds)
Principal Coordinator: Enrique Marquez
Principal Coordinator (Photonic Area): Iñigo Molina
Principal Coordinators (Digital Area): Angeles Navarro and Oscar Plata
Principal Coordinator (Analog Area): Enrique Marquez
Participant Entities: University of Malaga
Funding Amount: 5,500,000 €
Photonic Area: Activities related to (1) training in Silicon Photonics: highly specialized short courses in integrated photonics with internationally renowned lecturers and scholarships for promising students; (2) two high-impact research projects in integrated photonics, specifically in integrated photonic sensors and in optical systems for wireless communications.
Digital Area: Activities related to (1) training of PhD researchers and dissemination of results in scientific forums; (2) high-impact research focused on two main areas, advanced microprocessors and alternative architectures. In particular: technologies to optimize performance, energy efficiency and hardware costs in heterogeneous embedded systems, including accelerators and reconfigurable architectures; design of intelligent applications in various domains adapted to heterogeneous embedded platforms; and integration of techniques and technologies from different areas, covering hardware, software and simulation tools.
Analog Area: Activities related to (1) supporting communication systems based on baseband processing; (2) developing integrated transceivers for next-generation communications applications; (3) collaborating with local and regional companies in quantum system design projects based on RF design technologies.
In summary, the project is focused on advanced research and training in three key areas of microelectronics, promoting collaboration with industry and the internationalization of academic and research activities.
APP_DIA: Advanced Architectures and Programming for Data Intensive Applications
Funding Agency; Identifier: National Government (MICINN, AEI); PID2022-136575OB-I00
Period: September 1, 2023 to August 31, 2026
Principal Investigators: Oscar Plata and Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 401,250 €
With the end of Dennard scaling and the threat that even Moore's law is about to end, computing is facing challenging times. On one side, computer researchers are aggressively exploring alternative forms of computing (e.g., quantum or neuromorphic computing). Meanwhile, until these technologies mature, computer systems are resorting to architectural specialization (i.e., domain-specific accelerators) for continued performance-energy scaling. This is especially the case for HPC data centers, as we are quickly entering an era of extreme heterogeneity, characterized by cluster nodes integrating a multitude of cooperating accelerators. The situation is exacerbated by the rapid evolution of software driven by emerging applications (e.g., machine learning, bioinformatics, data analytics) and scientific progress, so hardware that cannot adapt to software will suffer from a short lifecycle and high engineering costs. Overall, computing power, energy efficiency and flexibility have become the main criteria in computer architecture.
Today, most data processing and analysis takes place in centralized computing facilities, that is, data centers with or without HPC capabilities. The rest occurs in (smart) distributed connected devices (IoT systems). These devices are evolving very fast thanks to the accelerated growth of ultra-low-power sensor electronics, low-power circuits and wireless communications, coupled with their integration into emerging systems-on-chip (SoCs) for multimodal monitoring. They greatly expand the availability of data, imposing strong requirements on processing systems for timely analysis. The increasing scale of data generation has resulted in the proliferation of time-critical, data-driven applications and workflows. These applications need a seamless integration of storage and processing resources along the path between the (cloud) HPC/data center and the IoT devices, giving rise to the edge and fog computing paradigms. This natural evolution of the cloud computing model is known as the computing (or compute) continuum: the dynamic, smart and fluid coupling of IoT, edge, fog and cloud resources into a single computing system.
This project fits into the broad context described above. We plan to provide architectural and processing solutions at different points of the computing continuum for data-driven and data-intensive applications, with the objective of improving computing performance, energy efficiency and/or processing flexibility for such workloads. The research team comprises groups working on computer architectures and accelerators, and on efficient programming models and techniques targeting data-intensive and data-driven applications.
CooTSIoT: HW-SW Co-Design and Optimization of Time Series based Applications for IoT Ultra-Low Power Embedded Devices
Funding Agency; Identifier: National Government (MCIN); TED2021-131527B-I00
Period: December 1, 2022 to November 30, 2024
Principal Investigators: Angeles Navarro and Rafael Asenjo
Participant Entities: University of Malaga
Funding Amount: 233,680 €
WWW: CooTSIoT
iDA: Immortal Database Access - Long-Term Recovery and Access to Decommissioned Database Systems
Funding Agency; Identifier: European Union (Eureka Eurostars-3); E!1622
Period: October 1, 2022 to December 31, 2024
Coordinator: Rune Bjerkestrand (Piql, Drammen, Norway)
Principal Investigator (UMA): Oscar Plata
Participant Entities: Piql (Norway), Tedial (Spain), University of Malaga, Norwegian Computing Center
Funding Amount: 1,756,175 €
The objective of the project is the research and development of tools and processes for database decommissioning that preserve the content permanently, on a physical medium and online, and that include tools for regenerating the database together with its search functionalities. The tools and processes guarantee immunity to technical obsolescence, protect access and search functionalities for future users, prevent data corruption, avoid migration, keep out hackers, lower the CO2 impact of cloud storage, ensure long-term user access and serve as the 'ultimate backup of last resort' for future generations.
iDA will produce the following tools as tangible results:
(1) A methodology that guides database owners in capturing the key functionalities and use cases of a database, which are preserved alongside the data when the database is decommissioned using SIARD. The use cases are specified in a new specification language (DBSpec).
(2) A read-only access engine (ROAE) software library that realises the DBSpec queries that were identified in the decommissioning process.
(3) An application programming interface (API) that maps the query specifications and supports database operations such as load, list queries and execute queries (a hypothetical sketch of such an interface follows this list).
(4) An access query engine with a compiler, executor and formatter modules that can convert queries into the correct syntax, execute the search in the SIARD database, and format the search results according to the query request output.
(5) A Self-Extracting Platform (SEP) that contains a simplified execution software machine that can run the access query engine in the database, a SIARD decoder, an interface and query language application, a human-readable bootstrap and a set of instructions needed to implement the platform that future users will need to restore the database and its functionalities.
(6) A Self-Extracting Archival Information Package (SE-AIP) specification that extends the existing AIP with application execution capabilities. The SEP will be a reference execution platform, but the design will be agnostic in terms of execution platform.
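To make the read-only access idea concrete, below is a minimal, hypothetical sketch of such an engine. The class and method names, and the use of SQLite as a stand-in for a SIARD archive, are illustrative assumptions, not the actual ROAE interface.

    import sqlite3  # stands in for a SIARD archive in this toy sketch

    class ReadOnlyAccessEngine:
        """Loads an archived database and runs only the queries captured
        at decommissioning time (DBSpec-like: named, parameterized)."""

        def __init__(self, archive_path: str, queries: dict):
            self.conn = sqlite3.connect(archive_path)
            self.queries = queries        # query name -> parameterized SQL

        def list_queries(self):
            return sorted(self.queries)

        def execute(self, name: str, params: tuple = ()):
            return self.conn.execute(self.queries[name], params).fetchall()

    # Usage: future users can only list and run the preserved use cases.
    engine = ReadOnlyAccessEngine(":memory:", {
        "by_city": "SELECT name FROM customers WHERE city = ?"})
    print(engine.list_queries())  # ['by_city']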
MArEA: Programming Models for Analytics Applications in Emerging Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); P20-00395-R
Period: October 5, 2021 to March 31, 2023
Principal Investigator: Angeles Navarro
Participant Entities: University of Malaga
Funding Amount: 74,700 €
WWW: MArEA
efhpDIC: Energy-Efficient High-Performance Data-Intensive Computing
Funding Agency; Identifier: National Government (MINECO); PID2019-105396RB-I00
Period: June 1, 2020 to May 31, 2023
Principal Investigators: Oscar Plata and Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 276,243 €
HPC (High Performance Computing) is one of the most fundamental infrastructures for scientific and engineering development in all disciplines, and has progressed enormously due to the increasing need to solve complex problems. Traditional HPC systems were mainly designed for compute-intensive applications. In recent years there has been an exponential growth in data availability (big data), and access to such a huge amount of diverse data has created the opportunity to extract useful information, or make new discoveries, through its analysis (data analytics). As a result, large-scale data processing applications, characterized by being memory-intensive, have seen great development.
Recently, the meaning of efficiency in HPC systems has evolved from a concept purely related to performance (classic supercomputing) to one that combines performance and energy consumption; that is, the goal is to optimize the performance/energy ratio. With the end of Moore's Law and Dennard scaling, continued performance/energy scaling started to come primarily from specialization, resulting in heterogeneous processor designs.
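As a toy illustration of this metric, consider two hypothetical design points (invented numbers, not project results); energy is time multiplied by average power, so a faster but hungrier accelerator can still win on the performance/energy ratio:

    # Hypothetical design points: (execution time in s, average power in W)
    design_points = {"baseline CPU": (10.0, 150.0),
                     "accelerator":  (4.0, 200.0)}
    for name, (t, p) in design_points.items():
        energy = t * p  # joules consumed per task
        print(f"{name:13s} time={t:5.1f} s  energy={energy:7.1f} J")
    # The accelerator draws more power but finishes 2.5x sooner, so each
    # task costs 800 J instead of 1500 J: better performance AND energy.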
This project sits in the broad context of data-intensive computing on modern HPC systems based on heterogeneous architectures. We plan to provide solutions at different levels (architecture, system and application) to perform data-intensive processing efficiently, in terms of both performance and energy. These solutions cover three main lines of research: architectures, programming models and techniques, and applications.
Design of Memory-Centric Architectures for Big-Data Applications
Funding Agency; Identifier: Local Government (Junta de Andalucia); P18-FR-3433
Period: January 1, 2020 to December 31, 2022
Principal Investigator: Oscar Plata
Participant Entities: University of Malaga
Funding Amount: 79,800 €
ADAHE: Accelerating Data Analytics on Energy-Efficient Heterogeneous Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); UMA18-FEDERJA-108
Period: November 15, 2019 to June 9, 2021
Principal Investigators: Angeles Navarro and Rafael Asenjo
Participant Entities: University of Malaga
Funding Amount: 70,481.30 €
WWW: ADAHE
Acceleration of Data-Intensive Applications in Architectures with 3D-Stacked Memories
Funding Agency; Identifier: Local Government (Junta de Andalucia); UMA18-FEDERJA-197
Period: November 15, 2019 to November 14, 2021
Principal Investigator: Oscar Plata
Participant Entities: University of Malaga
Funding Amount: 77,608.73 €
HPA-DIA: High Performance Architectures for Data Intensive Applications
Funding Agency; Identifier: National Government (MINECO); TIN2016-80920-R
Period: December 30, 2016 to December 29, 2019 (extended to December 31, 2020)
Principal Investigators: Emilio L. Zapata and Oscar Plata
Participant Entities: University of Malaga
Funding Amount: 405,471 €
The design of an efficient HPC system for data-intensive computing raises two fundamental problems: first, processing massive data in a reasonable time, and second, moving such data quickly from memory, where it is stored, to the execution units. Regarding the first problem, it has been observed that an effective way of computing data-intensive applications is the combined use of hardware accelerators (GPU, FPGA...) and multicore processors. These accelerators are designed to be very efficient for a special set of operations (or computing patterns), but usually at the cost of more complex programming.
The main objective of this project is the design of hardware/software technologies to improve the efficiency (performance-energy ratio) of modern HPC architectures when executing data-intensive applications. Although we are interested in any kind of data-intensive application, we focus especially on a class of applications with low locality and low arithmetic intensity. These applications typically exhibit large potential parallelism, yet they hardly scale on current HPC architectures because the memory channels saturate.
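To make "low arithmetic intensity" concrete, here is a back-of-the-envelope roofline-style sketch; the kernel and machine numbers are illustrative assumptions, not project measurements:

    # Streaming kernel y[i] = a*x[i] + y[i] in single precision (float32).
    flops_per_elem = 2                  # one multiply + one add
    bytes_per_elem = 3 * 4              # read x[i], read y[i], write y[i]
    ai = flops_per_elem / bytes_per_elem        # ~0.17 FLOP/byte

    peak = 1.0e12                       # assumed node peak: 1 TFLOP/s
    bandwidth = 1.0e11                  # assumed DRAM bandwidth: 100 GB/s
    attainable = min(peak, ai * bandwidth)      # roofline bound
    print(f"AI = {ai:.2f} FLOP/byte -> at most "
          f"{attainable / 1e9:.0f} of {peak / 1e9:.0f} GFLOP/s peak")
    # The memory channels cap this kernel at ~17 GFLOP/s, under 2% of
    # peak, no matter how many cores are thrown at it.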
To achieve the main project goal, we work in a coordinated way on two main lines of research:
(1) High performance models and architectures: Design of programming models, compilation and runtime systems, and hardware support to optimize the execution of data-intensive applications on heterogeneous architectures. This includes accelerating compute-intensive code sections with specialized hardware using advanced arithmetic.
(2) Data-intensive applications: A selection of application domains where a large amount of data is processed. That includes applications in the fields of automatic analysis of digital image and video, geophysics, machine learning, bioinformatics and biomedicine.
iVM: Immortal Virtual Machine - Solving the Problem of File Format and Infrastructure Obsolescence
Funding Agency; Identifier: European Union (Eureka Eurostars-2); E!12494
Period: December 1, 2018 to March 31, 2021
Coordinator: Rune Bjerkestrand (Piql, Drammen, Norway)
Principal Investigator (UMA): Oscar Plata
Participant Entities: Piql (Norway), Tedial (Spain), University of Malaga, Norwegian Computing Center, Norwegian National Museum
Funding Amount: 2,085,200 €
The project addresses the general problem of archiving and preserving digital information over the very long term (several centuries). Solving this problem is of utmost importance in sectors that hold content of great economic or cultural value and therefore want reliable and secure preservation of their data for an indefinite period. iVM focuses on the logical preservation of data, that is, the ability to extract and interpret in the very distant future data stored at present. This technology will be integrated into PPS (Piql Preservation System), based on high-resolution micrographic film as the physical medium.
The currently dominant approach to long-term preservation of information is data migration. Migration implies periodic transformations (for example, every 3 or 5 years) of the archived data into new logical formats, since the original formats become obsolete. Despite sophisticated methods for error detection and correction, every migration carries a risk of altering the digital content, compromising its integrity and completeness (data corruption). In addition, migration consumes time and resources, making it a very expensive process in the long term, especially for massive data.
An alternative to migration is emulation. Basically, this approach consists of developing the metadata necessary to locate, access and regenerate the archived data, the technology to encapsulate the documents and their metadata, and the software required to process and interpret those documents. Since this software must remain executable in the distant future, a key technology in this solution is the design of an abstract (virtual) machine and its emulator, independent of current hardware technology; the software is executed by this emulator. As a consequence, long-term data preservation is reduced to ensuring that an emulator of the virtual machine can be reconstructed in the distant future.
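As a minimal illustration of why this works, consider a toy stack machine (invented here for illustration; not the iVM instruction set) whose complete semantics fit in a few lines, so a future implementer could rebuild an emulator from the written specification alone and re-run archived software on top of it:

    def run(program):
        """Interpret a tiny list-of-tuples program on a stack machine."""
        stack, pc = [], 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "halt":
                break
            pc += 1
        return stack

    # Example: compute 2 + 3 on the toy machine.
    print(run([("push", 2), ("push", 3), ("add",), ("halt",)]))  # [5]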
The iVM project focuses on this emulation approach. In recent years some partial solutions have been developed along this line, with limited adoption in a few public and private institutions. However, there is currently no comprehensive, integrated solution on the market based on virtual emulation for long-term digital preservation.
Technologies for Long-Term Archival of Digital Information
Funding Agency; Identifier: Local Government (Junta de Andalucia); P12 TIC-1470
Period: January 1, 2014 to February 16, 2019
Principal Investigator: Oscar Plata
Participant Entities: University of Malaga
Funding Amount: 120,394 €
Recognition of Video Events Through High Performance Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); P12 TIC-1692
Period: January 1, 2014 to February 16, 2019
Principal Investigator: Nicolas Guil
Participant Entities: University of Malaga
Funding Amount: 154,054 €
GPU Acceleration of Genomic Data Processing and High Resolution Biomedical Images
Funding Agency; Identifier: Local Government (Junta de Andalucia); P12 TIC-1741
Period: January 1, 2014 to February 16, 2019
Principal Investigator: Manuel Ujaldon
Participant Entities: University of Malaga
Funding Amount: 85,010 €
Acceleration Techniques in Libraries and Parallel Languages for Many-Core and Heterogeneous Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); P11 TIC-8144
Period: March 27, 2013 to March 26, 2017
Principal Investigator: Angeles Navarro
Participant Entities: University of Malaga
Funding Amount: 90,252 €
Computation of Visibility in Digital Models of High Precision Elevations on Advanced Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); P11 TIC-8260
Period: March 27, 2013 to March 26, 2017
Principal Investigator: L. Felipe Romero
Participant Entities: University of Malaga
Funding Amount: 110,871.50 €
ACAM4: Architectures, Compilers and Applications in Multiprocessors
Funding Agency; Identifier: National Government (MINECO); TIN2013-42253-P
Period: January 1, 2014 to December 31, 2016
Principal Investigators: Emilio L. Zapata and Oscar Plata
Participant Entities: University of Malaga
Funding Amount: 207,878 €
Over recent decades, the advancement of applications in science and engineering has been increasingly intertwined with the availability of HPC computational tools, so the rapid development of high performance architectures, along with the tools needed to program them efficiently, is becoming increasingly critical for the proper development of science and engineering. Today, HPC systems are capable of processing large amounts of data in a reasonable time, thanks to the ability to run thousands of operations concurrently. To ensure that these systems continue to improve in performance and ease of use, solutions must be provided at various levels: architecture (including arithmetic, synchronization and storage), parallel programming models, compilers, runtime systems and applications. All of these solutions must also reduce power consumption or, at least, improve the performance-energy ratio.
The main goal of this project is the design of solutions in different areas to improve the efficiency and programming of high performance computing (HPC) systems considering application domains in various fields of science and engineering. These objectives can be classified into three areas:
(1) High performance architectures, especially to improve computer arithmetic and accelerate compute-intensive applications, and storage systems for large amounts of data.
(2) Parallel programming and hardware support, especially task-parallel programming models for heterogeneous architectures and high-performance thread synchronization based on transactional memory.
(3) Applications that require high performance computing, process large amounts of data or involve a great number of calculations.
Main Networks
HiPEAC-7: High Performance, Edge and Cloud Computing
Funding Agency; Identifier: European Union (HORIZON); CL4-2021-101069836
Period: December 1, 2022 to May 31, 2025
Coordinator: Koen de Bosschere (Ghent University)
Participant Entities: Over 400 institutions
Funding Amount: 2,125,000 €
WWW: HiPEAC
The objective of HiPEAC is to stimulate and reinforce the development of the dynamic European computing ecosystem that supports the digital transformation of Europe. It does so by guiding the future research and innovation of key digital, enabling and emerging technologies, sectors and value chains. The longer-term goal is to strengthen European leadership in the global data economy and to accelerate and steer the digital and green transitions through human-centred technologies and innovations. This will be achieved by mobilising and connecting European partnerships and stakeholders involved in the research, innovation and development of computing and systems technologies, who will provide roadmaps supporting the creation of next-generation computing technologies, infrastructures and service platforms.
The HiPEAC CSA proposal directly addresses the research, innovation, and development of next generation computing and systems technologies and applications. The overall goal is to support the European value chains and value networks in computing and systems technologies across the computing continuum from cloud to edge computing to the Internet of Things (IoT).
HiPEAC-6: High Performance Embedded Architecture and Compilation
Funding Agency; Identifier: European Union (H2020); ICT-2019-871174
Period: December 1, 2019 to February 28, 2023
Coordinator: Koen de Bosschere (Ghent University)
Participant Entities: Over 400 institutions
Funding Amount: 2,104,948.25 €
WWW: HiPEAC
Cyber-physical systems combine physical devices with computational resources for control and communication. Embedded computing is key for computers to interact directly with the physical world. The most common cyber-physical systems are modern cars, in which computers control the engine, braking, vehicle stability and support the driver. Cyber-physical systems are also present in energy networks, factories, automated warehouses as well as aeroplanes or trains. The EU-funded HiPEAC project is a coordination and support action that aims to structure, connect and cross-fertilise the European academic and industrial research and innovation communities in embedded computing and cyber-physical systems. It will bring together all actors and stakeholders in the field of cyber-physical systems of systems (CPSoS).
HPC4AI: High-Performance Computing for Artificial Intelligence (Thematic Network)
Funding Agency; Identifier: National Government (MINECO); TIN2017-90731-REDT
Period: July 1, 2018 to June 30, 2020
Coordinator: Mateo Valero (BSC - Barcelona Supercomputing Center)
Participant Entities: BSC/CNS, U.C. Madrid, U.P. Madrid, U. Malaga, IIA-CSIC, U. Granada, U. Santiago Compostela
Funding Amount: 10,000 €
The interaction between High-Performance computing (HPC) and Artificial Intelligence (AI) is creating a new horizon, "High-Performance Artificial Intelligence" (HPAI), that is fueling the growth of platforms, applications and products empowered by AI. Although AI is not a new field of research, recent advances in HPC have given AI the necessary tools to become a game-changing technology. Enabled by supercomputing technologies, AI techniques are making AI algorithms practical for many new use cases, and drawing a significant amount of interest from the private sector.
There are three main ingredients: large amounts of data, generated by processes and sensors or gathered from digital sources; an increase in available computational power; and the emergence of economically attractive use cases. These three components combined are giving birth to a new generation of intelligent machines that can automate complex tasks and decision-making processes, complementing (and eventually replacing) machines and people in an accelerated fashion. HPAI combines the intensive use of HPC (statistical analysis and numerically intensive optimization) with AI (search algorithms, un/supervised learning and autonomous agents); it is reshaping the IT industry and investment priorities in science and technology, influencing all aspects of human life and raising its own grand challenges.
The activities of the HPC4AI network have been designed with the purpose of: 1) promoting the dissemination of the knowledge and methodologies used throughout the Spanish community; 2) enabling close collaboration between the participating groups with the aim of optimizing and improving current applications; and 3) fostering participation in HPC4AI initiatives in Europe, securing the position of the research groups on the international scene.
CAPAP-H: Network of High Performance Computing on Heterogeneous Parallel Architectures
Coordinator: Trasgo Group (University of Valladolid)
WWW: CAPAP-H
The main objective of the network is to foster the development and use of novel techniques and methodologies enabling high performance computing on heterogeneous architectures.
The network is organized in eight work groups:
(1) Models and tools for programming, debugging and performance analysis of parallel systems
(2) Scheduling and workload balance
(3) Optimizations for hardware accelerators and unconventional platforms
(4) High performance applications and scalability
(5) Applications and tools for grid and cloud computing
(6) Applications on mobile platforms
(7) Power efficiency in heterogeneous systems
(8) Fault tolerance and resilience in high performance computing
SyeC: Supercomputing and eScience (Network of Excellence)
Funding Agency; Identifier: National Government (MINECO); TIN2014-52608-REDC
Period: December 1, 2014 to November 30, 2016
Principal Investigator: Mateo Valero (BSC - Barcelona Supercomputing Center)
Participant Entities: BSC/CNS, IRB, IAC, CSIC, FCRG, CIEMAT, CNIO, U. Barcelona, U. Cantabria, U.C. Madrid, U.P. Madrid, U.A. Madrid, U. Malaga, U.A. Barcelona, U. Zaragoza, U. Valencia
Funding Amount: 59,000 €
WWW: SyeC
Numerical simulation, experimentation and theory constitute the three basic paradigms for progress in science and engineering. Supercomputers are tools for the simulation of complex processes that require a very large number of operations to be solved. The performance of supercomputers has been improving by three orders of magnitude every decade, made possible by technology scaling and advances in system design (both hardware and software) following an evolutionary approach.
For exascale, the entire system architecture is expected to include a large number of architectural innovations, mainly due to the rapid and disruptive changes anticipated in processor, memory, interconnect and storage technologies over the next decade. Achieving the complex exascale "Power-Performance-Resilience-Productivity" goal will require a revolutionary approach, with a strong application-driven hardware/software co-design strategy at its center.
The SyeC network activities have been designed with the purpose of 1) favoring the dissemination of the know-how and skills for general use by the community, 2) enabling close collaborations between groups from different scientific domains to either optimize current algorithms and applications or re-use findings and experiences across disciplines, and 3) fostering the participation of group leaders in international HPC and BigData co-design initiatives and consortia, securing the position of Spanish research groups in the international scene.
Older Research Projects
ACAM3: Architectures, Compilers and Applications in Multiprocessors
Funding Agency; Identifier: National Government (MINECO); TIN2010-16144
Period: January 1, 2011 to December 31, 2013
Principal Investigator: Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 610,082 €
This project proposes a series of interrelated research lines in the context of high-performance computing (HPC), addressing new challenges that arise from the rapid evolution of HPC systems. Within this framework, the project focuses on three broad areas:
(1) Parallelism exploitation: we will provide solutions at the language, execution model, compiler and runtime levels to improve both the performance and the productivity of new homogeneous and heterogeneous architectures.
(2) Architectures: we will provide hardware solutions for the processing operations in the field of computer arithmetic, specific architectures for mobile communications and architectural solutions for data-intensive applications in the context of cloud computing.
(3) Challenging applications: we will provide solutions to design and parallelize applications of great scientific and commercial interest. Specifically, in the areas of video analysis, biomedicine, bioinformatics and solar radiation.
Productivity Improvement in Irregular Codes
Funding Agency; Identifier: Local Government (Junta de Andalucia); P08 TIC-3500
Period: January 13, 2009 to January 12, 2014
Principal Investigator: Rafael Asenjo
Participant Entities: University of Malaga
Funding Amount: 163,027 €
Optimization of the Transactional Memory Model for Programming Multicore Processors
Funding Agency; Identifier: Local Government (Junta de Andalucia); P08 TIC-4341
Period: January 13, 2009 to January 12, 2013
Principal Investigator: Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 129,523.60 €
SyeC: Supercomputing and eScience (Consolider-Ingenio 2010)
Funding Agency; Identifier: National Government (MEC); CSD2007-00050
Period: October 1, 2007 to November 29, 2012
Principal Investigator: Mateo Valero (BSC - Barcelona Supercomputing Center)
Participant Entities: BSC/CNS, IRB, IAC, CSIC, FCRG, CIEMAT, CNIO, U. Barcelona, U. Cantabria, U.C. Madrid, U.P. Madrid, U.A. Madrid, U. Malaga, U.A. Barcelona, U. Zaragoza, U. Valencia
Funding Amount: 5,000,000 €
WWW: SyeC
Numerical simulation, experimentation and theory constitute the three basic paradigms for progress in many areas of science and engineering. Supercomputers are tools for the simulation of complex processes that require a very large number of operations to be solved. Future supercomputers will be built with thousands of chips, each containing hundreds of processors, interconnected by fast networks. Generally speaking, we expect systems that will contain millions of processors within 5 years, with speeds on the order of petaflop/s, petabytes of main memory and processor interconnection networks with speeds of several petabit/s. From now on, progress in supercomputing will only be possible through close cooperation between its users and hardware/software machine designers.
The main aim of this Consolider proposal is to offer a national framework in which research groups expert in supercomputing applications can collaborate with expert hardware/software machine designers in order to design and use these machines efficiently in the near future. On the one hand, current applications have to be rethought to exploit efficiently the extraordinary features of future hardware; otherwise they will not only fail to use the enormous computing capabilities of future systems, but may even run slower than today, because future processors will not be faster than current ones for technological and design reasons. On the other hand, the characteristics of the applications clearly need to be known in order to make design decisions about these systems (processors, memory, interconnection network, etc.).
ARCHIVATOR: ARCHIVATOR Process - The Solution for Long-Term Archiving of Digital Data
Funding Agency; Identifier: European Union (Eureka Eurostars); E!4693
Period: April 1, 2009 to March 30, 2012
Coordinator: Rune Bjerkestrand (Cinevation, Drammen, Norway)
Principal Investigator (UMA): Oscar Plata
Participant Entities: Cinevation (Norway), In-Vision (Austria), P+S Technik (Germany), Tedial (Spain), U. Malaga, Nordisk Film Post Production (Norway), Centro de Produccion Audiovisual Autor (Spain)
Funding Amount: 7,500,000 €
The project aims to develop a complete solution to the problem of long-term storage of digital data that ensures its integrity. The solution consists of a secure, reliable and cost-effective long-term data archival system that leverages the well-documented archival properties of micrographic film to store digital data within an information-technology (IT) framework, where it is integrated like any other storage system. The project uses specialized non-fading, polyester-based, photo-sensitive, high-resolution micrographic film to archive all forms of digital data, including, but not limited to, documents, databases, high definition images and fully mixed-down audiovisual sequences.
The workflow used in the ARCHIVATOR system is composed of four processes (a toy round-trip sketch follows the list):
(1) Data Boxing: The process of converting digital data so that it can be recorded onto the film. It includes ingestion, formatting and encoding of the data. Metadata is added at this point, both so that the film record can be easily identified and tracked and to support data preservation.
(2) Data Recording: The process of exposing the previously boxed data onto high resolution micrographic film. High-speed recording technology keeps the process fast, and the data density and bit depth on the film are optimized to maximize the reproducibility of the recording.
(3) Data Scanning: The process of regenerating the recorded data at high resolution. This step turns the data from a film-based dataset back into a computer-readable digital dataset, completing the process chain.
(4) Data Unboxing: The process that fully restores the scanned data so that it appears as it did in its original 'unboxed' form. This step involves decoding and re-formatting the data to make it fully readable.
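Below is a hedged, toy sketch of the boxing/unboxing round trip; the header layout and field names are invented for illustration and are not the ARCHIVATOR format:

    import hashlib, json

    def box(payload: bytes, metadata: dict) -> bytes:
        """Serialize payload plus identifying/integrity metadata into one
        self-describing record (the bit pattern later exposed on film)."""
        header = json.dumps({**metadata,
                             "length": len(payload),
                             "sha256": hashlib.sha256(payload).hexdigest()})
        hbytes = header.encode()
        return len(hbytes).to_bytes(4, "big") + hbytes + payload

    def unbox(record: bytes) -> tuple:
        """Recover payload and metadata, verifying integrity."""
        hlen = int.from_bytes(record[:4], "big")
        meta = json.loads(record[4:4 + hlen])
        payload = record[4 + hlen:4 + hlen + meta["length"]]
        assert hashlib.sha256(payload).hexdigest() == meta["sha256"]
        return payload, meta

    data, meta = unbox(box(b"archival master", {"title": "reel-0001"}))
    print(meta["title"], data)  # reel-0001 b'archival master'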
ACAM2: Architectures, Compilers and Applications in Multiprocessors
Funding Agency; Identifier: National Government (MEC); TIN2006-01078-Consolider
Period: October 1, 2006 to September 30, 2011
Principal Investigator: Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 1,028,500 €
In this project we aim to deepen the study of the leading high performance architectures of the present time, clusters and Grid computing, as well as the new multi-core architectures that are starting to be used in modern processors. In this study we will give special consideration to applications that process data structured in complex ways or that are subject to strict temporal and/or spatial restrictions (multimedia, bioinformatics, physical systems...). Within this framework, the project is organized in three large areas:
(1) Automatic analysis and optimization: automatic analysis and exploitation of parallelism and locality for applications based on complex data structures (pointers, dynamic storage, indirections).
(2) Architectures: multimedia architectures, for graphics and video processing, and microarchitecture support for applications with complex data structures.
(3) Challenging applications: applications in the field of multimedia information (audiovisual), bioinformatics and numerical simulation (PDEs).
Efficient Video Analysis Techniques in Advanced Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); P07 TIC-2800
Period: February 1, 2008 to January 31, 2011
Principal Investigator: Nicolas Guil
Participant Entities: University of Malaga
Funding Amount: 82,200 €
Processing of Biomedical Images on Graphic Architectures
Funding Agency; Identifier: Local Government (Junta de Andalucia); P06 TIC-2109
Period: January 1, 2007 to December 31, 2009
Principal Investigator: Manuel Ujaldon
Participant Entities: University of Malaga
Funding Amount: 129,236 €
ACAM1: Architectures, Compilers and Applications in Multiprocessors
Funding Agency; Identifier: National Government (MCYT); TIC2003-06623
Period: December 1, 2003 to November 30, 2006
Principal Investigator: Emilio L. Zapata
Participant Entities: University of Malaga
Funding Amount: 803,200 €
The high performance of current computers is the result of combining advances in fabrication technologies (microelectronics, optics and magnetics), in architecture (new design techniques) and in compilers. Together, these advances allow components of different speeds to be matched with minimal performance degradation of the fastest ones. We are all convinced that this overall performance will keep increasing in the coming years. However, new computer applications on the horizon demand higher computational power, higher storage capacity and faster communications.
In this project we will explore in depth the new multiprocessor systems (clusters) and Grid computing, giving preference to the important problems around mass storage and data management and processing that are typical of multimedia applications. We will approach this subject along three main research lines:
(1) Automatic optimization (parallelism, locality) of irregular and dynamic applications.
(2) Mass storage, multimedia, graphics and video processing architectures.
(3) Emerging applications, in fields like audiovisual information, bioinformatics and numerical simulation.
ACAMM: Architectures, Compilers and Multimedia Applications in Multiprocessors
Funding Agency; Identifier: National Government; TIC2000-1658
Period: January 1, 2001 to December 31, 2003
Principal Investigator: Emilio L. Zapata
Participant Entities: University of Malaga
Very important technological advances have taken place over the last years. Distributed shared memory multiprocessor architectures have appeared as the solution to the scalability problem of shared memory machines. Dependence analysis techniques in current compilers have improved. Multithreaded operating systems are usual on contemporary machines. OpenMP has been adopted as a standard for shared memory parallel programming. Probably the most striking achievement is the development of microelectronics, which makes it possible to design high-performance processors for applications as demanding as multimedia. These achievements make it possible to cope with problems of higher complexity.
In the project we will study in depth three aspects of distributed shared memory multiprocessors:
(1) Semi-automatic parallelization of numeric, graphic and video applications.
(2) Design of automatic parallelization tools for programs with complex and dynamic data structures.
(3) Design of arithmetic processors for graphics and image/video analysis.
Older Networks
HiPEAC-5: High Performance and Embedded Architecture and Compilation
Funding Agency; Identifier: European Union (H2020); ICT-2017-779656
Period: December 1, 2017 to February 29, 2020
Coordinator: Koen de Bosschere (Ghent University)
Participant Entities: Over 400 institutions
Funding Amount: 2,600,000 €
WWW: HiPEAC
HiPEAC-4: High Performance and Embedded Architecture and Compilation
Funding Agency; Identifier: European Union (H2020); ICT-2015-687698
Period: January 1, 2016 to February 28, 2018
Coordinator: Koen de Bosschere (Ghent University)
Participant Entities: Over 400 institutions
Funding Amount: 3,480,000 €
WWW: HiPEAC
HiPEAC-3: High Performance and Embedded Architecture and Compilation (Network of Excellence)
Funding Agency; Identifier: European Union (FP7); ICT-287759
Period: January 1, 2012 to February 29, 2016
Coordinator: Koen de Bosschere (Ghent University)
Participant Entities: Over 400 institutions
Funding Amount: 3,808,245 €
WWW: HiPEAC
HiPEAC-2: High Performance and Embedded Architecture and Compilation (Network of Excellence)
Funding Agency; Identifier: European Union (FP7); ICT-217068
Period: February 1, 2008 to January 31, 2012
Coordinator: Koen de Bosschere (Ghent University)
Participant Entities: Over 400 institutions
Funding Amount: 4,800,000 €
WWW: HiPEAC
HiPEAC: High Performance and Embedded Architecture and Compilation (Network of Excellence)
Funding Agency; Identifier: European Union (FP6); IST-004408
Period: September 1, 2004 to December 31, 2009
Coordinator: Mateo Valero (BSC - Barcelona Supercomputing Center)
Participant Entities: Over 400 institutions
WWW: HiPEAC
RTCTCM: Coding and Transmission of Multimedia Content (Thematic Network)
Funding Agency; Identifier: National Government (MINECO); TSI2007-30447-E
Period: March 1, 2008 to February 28, 2009
Coordinator: Joan Serra-Sagrista (UAB)
Participant Entities: About 20 groups
Funding Amount: 6,000 €