This special edition of IJPP showcases extended journal versions of the five best papers from the 2020 IEEE International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS 2020).

SAMOS is an exceptional conference. It focuses on embedded systems, but that is hardly the only aspect that distinguishes it. Perhaps the most distinctive and exciting aspect of this symposium (in the true meaning of the word) is that, every year, it gathers an ever-expanding tribe of researchers from academia and industry on the quiet and inspiring northern mountainside of the Aegean island of Samos, suffused by the light of the Mediterranean, to indulge in a moment of reflection, to build and enjoy lasting personal and professional friendships and camaraderie, all while collaboratively exploring the frontiers of knowledge and science. The formal and intensive technical sessions are confined to the mornings, with a lively panel or a distinguished keynote speaker closing the formal part of the day and leading nicely into the afternoons, the gorgeous sunsets, and the evenings reserved for informal discussions and good food against the backdrop of the inviting Aegean Sea.

This perennial spirit of SAMOS is reflected herein by the five outstanding articles, on a diverse set of topics, that comprise this special issue. A brief overview of the five articles follows.

In the first paper, “A Quantitative Study of Locality in GPU Caches for Memory-Divergent Workloads”, the authors provide a deep analysis of data locality in GPU architectures and show how this knowledge can be used to optimize cache design.

The paper “DRAMSys4.0: An Open-Source Simulation Framework for In-Depth DRAM Analyses” presents a tool for fast yet cycle-accurate DRAM simulation. It supports the latest DDR and LPDDR standards, including LPDDR5.

In “Fine-Grained Power Modeling of Multicore Processors using FFNNs”, the authors explore a new technique for core-level power modeling using feed-forward neural networks (FFNNs). With this method, the average error is reduced drastically compared to the state of the art.

“Energy-Efficient Partial-Duplication Task Mapping under multiple DVFS schemes” describes a new approach for energy-aware yet reliable task execution on multicore platforms.

Last but certainly not least, “AMAIX in-depth: A Generic Analytical Model for Deep Learning Accelerators” presents an analytical model for performance estimation of dedicated deep learning accelerators. Its working principle is demonstrated on the NVDLA accelerator presented by Nvidia.

We hope that you enjoy reading this special issue and, at least virtually, imbibe a bit of the special spirit of SAMOS.