- It was integrated into the Windows operating system beginning with Vista in 2007. Prior to ASLR, the memory locations of files and applications were either known or easily determined.
- Adding ASLR to Vista increased the number of possible address space locations to 256, meaning an attacker has only a 1 in 256 chance of guessing the correct location to execute code.
- Apple began including ASLR in Mac OS X 10.5 Leopard, and Apple iOS and Google Android both adopted ASLR in 2011.
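The 1-in-256 figure above translates directly into an attacker's expected effort. A minimal sketch (the entropy value is taken from the notes above; the independence of attempts is an assumption):

```python
# Probability an attacker guesses the randomized location at least once
# in n independent attempts, with 256 equally likely placements
# (Vista's original ASLR entropy, per the notes above).
def guess_success_prob(attempts: int, slots: int = 256) -> float:
    return 1.0 - (1.0 - 1.0 / slots) ** attempts

print(guess_success_prob(1))    # 1/256, about 0.0039
print(guess_success_prob(177))  # roughly 0.5: ~177 tries for a coin-flip chance
```

Modern 64-bit systems use far more entropy than 256 positions, which drives this success probability down dramatically.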
Deception is a mechanism that attempts to distort an attacker's view or mislead the attacker into taking a course of action that better suits the defender's goals.
A common deception defense is the use of network honeypots.
A honeypot is a computer system that is designed to be a trap for unauthorized access.
Honeypots are deployed within a network to appear like normal, active systems to an outsider.
How to build honeypots
- One deception technique is mimicking: a honeypot attempts to mimic a real system to fool the adversary into probing and/or attacking it.
- Interaction: the honeypot responds to queries with information that represents a plausible system within the infrastructure, but unlike a normal system it maintains very detailed logs of all interactions. From these detailed logs, administrators can gain insight into an attacker's goals and methods, and put other measures in place in hopes of preventing an attack.
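The two ideas above (mimic a service, log every interaction) can be sketched in a few lines. This is an assumed toy design, not a production honeypot: it presents a fake FTP banner and records each connection attempt.

```python
# Minimal low-interaction honeypot sketch: present a fake service banner
# and log every connection attempt with a timestamp and source address.
import socket
import threading
from datetime import datetime, timezone

LOG = []  # a real honeypot would keep persistent, detailed logs

def serve_once(srv):
    """Accept one connection, log it, and send a mimicked banner."""
    conn, addr = srv.accept()
    LOG.append((datetime.now(timezone.utc).isoformat(), addr))
    conn.sendall(b"220 FTP server ready\r\n")  # pretend to be an FTP server
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# An "attacker" probing the decoy service:
probe = socket.create_connection(("127.0.0.1", port))
banner = probe.recv(64)
probe.close()
t.join()
srv.close()

print(banner)  # the mimicked banner the prober sees
print(LOG)     # one logged interaction: timestamp + source address
```

A deployed honeypot would mimic richer protocol behavior and would never be reachable by legitimate traffic, so any interaction in the log is suspicious by definition.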
 Probabilistic Performance Analysis of Moving Target and Deception Reconnaissance Defenses, by Michael Crouse, in MTD15
System Model for a Multi-core Processor
Because of the long access time of main memory compared to fast processors, smaller but faster memories, called caches, are used to reduce the effective memory access time as seen by the processor.
Modern processors feature a hierarchy of caches.
“Higher-level” caches, which are closer to the processor core, are smaller but faster than lower-level caches, which are closer to main memory.
Each core typically has two private top-level caches:
- 1) one for data
- 2) one for instructions
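The "effective memory time" mentioned above is just a hit-rate-weighted average of the latencies at each level. A sketch with illustrative (not measured) latency numbers:

```python
# Effective access time (EAT) for a simplified one-level cache in front
# of main memory: a hit is served at cache speed, a miss falls through.
# Latency values below are illustrative, not measured.
def effective_access_time(hit_rate: float, hit_ns: float, miss_ns: float) -> float:
    return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns

# e.g. cache hit in ~1 ns, main memory in ~100 ns, 95% hit rate:
print(effective_access_time(0.95, 1.0, 100.0))  # 5.95 ns
```

The same formula can be applied recursively: the "miss" latency of L1 is itself the effective access time of L2, and so on down the hierarchy.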
How it works
Per-Core Slice Cache
 Modern Intel processors, starting with the Sandy Bridge microarchitecture, use a more complex architecture for the LLC, to improve its performance.
The LLC is divided into per-core slices, which are connected by a ring bus. Slices can be accessed concurrently and are effectively separate caches, although the ring bus ensures that each core can access the full LLC (with higher latency for remote slices).
[Figure: Ring bus architecture and sliced LLC]
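Which slice serves a given physical address is decided by a hardware hash of the address bits. Intel's actual slice-selection hash is undocumented, so the following is only a toy stand-in to illustrate the idea that cache lines are spread across slices:

```python
# Toy sketch of mapping a physical address to an LLC slice. Real Intel
# hardware uses an undocumented XOR-based hash over physical address
# bits; this XOR-fold is an assumed simplification, not the real hash.
def slice_for_address(paddr: int, n_slices: int = 4) -> int:
    line = paddr >> 6                   # 64-byte cache line address
    h = 0
    while line:
        h ^= line & (n_slices - 1)      # XOR-fold low bits (n_slices = power of 2)
        line >>= 2
    return h % n_slices

for a in (0x1000, 0x2040, 0x3F80, 0x7000):
    print(hex(a), "->", "slice", slice_for_address(a))
```

The key property this models is that consecutive cache lines land on different slices, letting the cores' accesses proceed in parallel across the ring.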
- A process executes in its private virtual address space, composed of pages, each representing a contiguous range of addresses.
- The typical page size is 4KB.
- Each page is mapped to an arbitrary frame in physical memory.
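With 4KB pages, the low 12 bits of a virtual address are the offset within the page and the remaining bits select the page. A minimal sketch of the split; the page-to-frame mapping itself comes from the page table, shown here as an assumed toy dictionary:

```python
# Splitting a virtual address into (virtual page number, offset) for
# 4 KB pages, then translating via a toy page table (assumed mapping,
# for illustration only).
PAGE_SIZE = 4096          # 4 KB
OFFSET_BITS = 12          # log2(4096)

def split(vaddr: int):
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

page_table = {0x00400: 0x1A2B3}   # toy entry: virtual page -> physical frame

def translate(vaddr: int) -> int:
    vpn, offset = split(vaddr)
    return (page_table[vpn] << OFFSET_BITS) | offset

print(hex(translate(0x00400123)))  # -> 0x1a2b3123: frame 0x1A2B3 + offset 0x123
```

Note that the offset is carried over unchanged; only the page number is remapped, which is what makes page-granular placement "arbitrary" without breaking contiguity within a page.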
Top of Rack
- All servers in a rack are first connected to a separate Top of Rack (ToR) switch, and then the ToR switches are connected to aggregation switches.
- Such a topology has become a mainstream network topology in data centers.
- After creating a VPC, a customer can launch instances into the VPC instead of into the large shared EC2 network pool.
- The customer can also divide a VPC into multiple subnets, where each subnet can have a preferred availability zone to place instances.
- The private IP address of an instance in a VPC is known only to its owner and cannot be discovered by other users, which significantly reduces the threat of co-residence.
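Dividing a VPC address block into subnets is plain CIDR arithmetic, which Python's `ipaddress` module handles directly. The blocks below are illustrative choices, not AWS defaults:

```python
# Sketch of carving a VPC's address block into per-availability-zone
# subnets. The /16 VPC block and /24 subnet size are assumed examples.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")      # the VPC's private address block
subnets = list(vpc.subnets(new_prefix=24))     # 256 possible /24 subnets

print(len(subnets))    # 256
print(subnets[0])      # 10.0.0.0/24, e.g. placed in one availability zone
```

Each subnet would then be pinned to a preferred availability zone, as described above, and instances launched into it receive private addresses from that block.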
- FTP: 20,21
- SSH: 22
- Telnet: 23
- SMTP: 25, 587
- WHOIS: 43
- DNS: 53
- DHCP: 67 (server), 68 (client)
- Finger Protocol: 79
- HTTP: 80
- SQL: 118
- HTTPS: 443
- MySQL: 3306
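The list above as a lookup table, with a cross-check against the system's own services database via Python's `socket` module:

```python
# Well-known ports from the list above as a lookup table. DHCP uses 67
# (server) and 68 (client).
import socket

PORTS = {
    "ftp": (20, 21), "ssh": (22,), "telnet": (23,), "smtp": (25, 587),
    "whois": (43,), "dns": (53,), "dhcp": (67, 68), "finger": (79,),
    "http": (80,), "https": (443,), "mysql": (3306,),
}

print(PORTS["smtp"])                   # (25, 587)
print(socket.getservbyname("http"))    # 80, from the system services database
```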