they all run the same
after AFAs -
some aspects of processors used in SSDs
some thoughts about the custom SSD business
not so simple list of military SSD companies
memory pose new security risks?
risk reward with big "flash as RAM"
emulation in your new "flash as RAM" solution?
where are we heading with memory intensive systems and software?
|SCM DIMM wars in SSD servers|
The theme of SCM DIMM wars in SSD servers is closely related to several multi-year technology themes in the SSD market.
From an SSD history perspective it can be viewed as being the successor to 2 earlier centers of focus in the modern era of SSDs.
1999 to 2007 - the dominant focus and center of gravity was FC SAN SSDs.
2007 to 2014 - the dominant focus and center of gravity was PCIe SSD accelerators.
Since 2015 - the new dominant focus and center of gravity has been DIMM wars - which is revisiting long held assumptions such as:-
what is memory?
where's the best place for it to go?
and how much memory of each latency is best to have?
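The "how much memory of each latency" question can be made concrete with ballpark numbers. This is an illustrative sketch only - the latencies below are rough orders of magnitude commonly quoted for each technology class, not vendor figures:

```python
# Illustrative sketch (not vendor data): ballpark access latencies for the
# memory/storage tiers being argued over in the DIMM wars.
tiers_ns = {
    "on-chip SRAM cache": 1,
    "DRAM DIMM":          100,
    "SCM DIMM":           1_000,
    "NVMe flash SSD":     100_000,
    "SAS/SATA flash SSD": 1_000_000,
}

for name, latency in tiers_ns.items():
    # each step down the hierarchy costs roughly 10x-100x more latency
    ratio = latency / tiers_ns["DRAM DIMM"]
    print(f"{name:<20} ~{latency:>9,} ns  ({ratio:g}x DRAM)")
```

The spread of roughly 6 orders of magnitude between the top and bottom tiers is why "where's the best place for it to go?" has no single answer - it depends on how often each piece of data is touched.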
|the road to DIMM
|"Order of magnitude differences between commercial products are rare in computer architecture which may lead to the TPU becoming an archetype for domain-specific architectures... Among the success factors of the TPU were the large matrix multiply unit (65,536 8-bit systolic MACs) and the substantial software-controlled on-chip memory."
Performance Analysis of a 92 TOPS Tensor Processing Unit ASIC (pdf) - a paper by developers at Google (June 26, 2017)|
|Editor:- June 7, 2017 - These are some of the ideas which emerge from a slideshare - with ReRAM from Crossbar - based on a presentation at a recent event.
- Standard memory buses are too slow to support the computational needs of new distributed (and always on) AI applications which leverage IoT.
- The only way to improve ultimate "time to get answers" performance is to integrate storage on the same die as the processor.
- ReRAM can be embedded in SoCs in any CMOS fab to deliver battery-friendly latency under 5ns.
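The latency claim in the last bullet can be put in perspective with a tiny comparison. Only the 5ns figure comes from the presentation - the other numbers are my own illustrative ballparks for typical off-chip accesses:

```python
# Rough sketch of the on-die vs off-chip latency argument (illustrative
# ballpark values, not Crossbar specifications).
embedded_reram_ns = 5        # claimed on-die ReRAM access, per the slides
dram_bus_round_trip_ns = 100 # typical CPU-to-DRAM-DIMM access (ballpark)
nvme_flash_read_ns = 80_000  # typical NVMe flash read (ballpark)

print(f"vs DRAM bus:  {dram_bus_round_trip_ns / embedded_reram_ns:.0f}x slower than on-die")
print(f"vs NVMe read: {nvme_flash_read_ns / embedded_reram_ns:,.0f}x slower than on-die")
```

Even against DRAM the on-die access wins by an order of magnitude, which is the core of the "integrate storage on the same die" argument.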
|Tachyum says it will blow
the cobwebs off Y2K fossilized CPU performance|
|Editor:- April 7, 2017 - I was fortunate enough to have had close relationships with technologists and marketers of high end server CPUs in the 1990s who explained to me in detail the performance limitations of CPU clock speeds and memories which would prevent CPUs getting much faster beyond the year 2000 - due to physics and the latency lost to signal coherency when signals left the silicon and hit copper pads.|
That background was one of the triggers which made me reconsider the significance of the
earlier CPU-SSD equivalence and acceleration work I had stumbled across in my
work in the late 1980s and write about it in these pages when I explained (in
2003) why I thought the enterprise SSD market (which at that time was worth only
tens of millions of dollars) had the potential to become a much bigger $10
billion market by looking at server replacement costs and acceleration as the
proposition for market adoption and disregarding irrelevant concerns
about cost per gigabyte.
I was surprised these equivalencies weren't
more widely known. And that's why I recognized the significance of what the
pioneers of SSD accelerators on the SAN were doing in the early 2000s.
It has taken 17 years - but the clearest ever expression of the CPU GHz problem and why server architecture got stuck in that particular clock rut (for those of you who don't have the semiconductor background) appears in a recent release from Tachyum which says (among other things)...
"The 10nm transistors in use
today are much faster than the wires that connect them. But virtually all major
processing chips were designed when just the opposite was true: transistors were
very slow compared to the wires that connected them. That design philosophy is
now baked into the industry and it is why PCs have been stuck at 3-4GHz for a
decade with "incremental model year improvements" becoming the norm.
Expecting processing chips designed for slow transistors and fast wires to still
be a competitive design when the wires are slow and the transistors are fast,
doesn't make sense."
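The "slow wires" claim can be illustrated with a back-of-envelope calculation. The resistance and capacitance values below are my own rough textbook ballparks for a thin on-chip wire, not Tachyum data:

```python
# Back-of-envelope sketch of why long on-chip wires limit clock speed
# (illustrative RC values, not Tachyum data). A long unrepeatered wire
# behaves like a distributed RC line with delay ~ 0.5 * R * C.
r_per_mm_ohm = 1_000   # resistance per mm of a thin metal wire (ballpark)
c_per_mm_f = 0.2e-12   # capacitance per mm (ballpark)

def wire_delay_ps(length_mm: float) -> float:
    # both R and C grow with length, so delay grows with the SQUARE of length
    r = r_per_mm_ohm * length_mm
    c = c_per_mm_f * length_mm
    return 0.5 * r * c * 1e12

clock_period_ps = 1e12 / 3.5e9   # one cycle at 3.5 GHz is ~286 ps
for mm in (0.5, 1, 2, 5):
    cycles = wire_delay_ps(mm) / clock_period_ps
    print(f"{mm} mm wire: ~{wire_delay_ps(mm):.0f} ps ({cycles:.2f} clock cycles)")
```

With these ballpark values a 2mm wire already costs more than a full cycle at 3.5GHz, while the transistors switching at each end are far faster - which is the mismatch the Tachyum release is describing.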
The warm-up press release also says - "Tachyum is set to deliver increases of more than 10x in processing performance at a fraction of the cost of any competing product. The company intends to release
a major announcement within the next month or two." ...read
|Symbolic IO reveals more|
|Editor:- February 25, 2017 - Symbolic IO is a company of interest which I listed in my blog - shining companies showing the way ahead - but until this week they hadn't revealed much publicly about their technology.|
Now you can read details in a new blog written by Chris Mellor, who saw a demo system at an event in London.
As previously reported, a key feature of the technology is that data is coded into a compact form - effectively a series of instructions for how to create it - with operations using a large persistent memory (supercap protected RAM).
Among other things Chris reports that the demo system had 160GB of raw, effectively persistent memory capacity - which, with coding compression, yielded an effective (usable) memory capacity of 1.79TB.
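Symbolic IO has not published its algorithm, so the toy sketch below is purely my own illustration of the general idea of storing "instructions for how to create data" instead of the data itself - here, a trivial recipe code where repeated blocks are stored once and replayed by reference:

```python
# Toy illustration only - NOT Symbolic IO's actual algorithm. Repeated data
# blocks are stored once in a table; the "recipe" is a list of instructions
# saying which table entry to emit at each position.
def encode(blocks):
    """Return (table, recipe): unique blocks plus replay instructions."""
    table, recipe, index = [], [], {}
    for b in blocks:
        if b not in index:          # first sighting: store the raw block
            index[b] = len(table)
            table.append(b)
        recipe.append(index[b])     # instruction: "emit table entry i"
    return table, recipe

def decode(table, recipe):
    return [table[i] for i in recipe]

blocks = ["hdr", "zeros", "zeros", "log", "zeros", "hdr"]
table, recipe = encode(blocks)
assert decode(table, recipe) == blocks   # lossless round trip
print(f"raw blocks: {len(blocks)}, stored blocks: {len(table)}")

# the reported demo numbers imply roughly this amplification factor:
print(f"reported: 1.79 TB usable from 160 GB raw -> ~{1.79e12 / 160e9:.1f}x")
```

Note that without the table the recipe is meaningless - which is also the basis of the security model described next.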
Security in the system is rooted in the fact that each system evolves its own set of replacement codes - computed on the fly and held in persistent memory - without which the raw data is meaningless. A security sensor module located in a slot in the rack ("the Eye") can erase the data relationship codes when GPS and other boundary conditions are crossed (as in some
fast purge SSDs).
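Symbolic IO has not published the Eye's interface, so the names and logic in this sketch are entirely my own hypothetical illustration of the concept: if a GPS fix falls outside an authorized zone, erase the code tables that make the raw data meaningful.

```python
# Hypothetical sketch of an "Eye" style boundary check - my illustration
# only, not Symbolic IO's published design.
from dataclasses import dataclass, field

@dataclass
class SecuritySensor:
    # authorized zone as a lat/lon bounding box (hypothetical representation)
    lat_range: tuple = (51.3, 51.7)    # e.g. roughly around London
    lon_range: tuple = (-0.5, 0.3)
    code_tables: dict = field(default_factory=lambda: {0: b"\x12\x34"})

    def check_fix(self, lat: float, lon: float) -> bool:
        """Return True if still inside the zone; purge code tables if not."""
        inside = (self.lat_range[0] <= lat <= self.lat_range[1]
                  and self.lon_range[0] <= lon <= self.lon_range[1])
        if not inside:
            self.code_tables.clear()   # raw data is now unrecoverable
        return inside

eye = SecuritySensor()
assert eye.check_fix(51.5, -0.1)       # inside the zone: data stays usable
assert not eye.check_fix(48.8, 2.35)   # outside the zone: codes purged
assert eye.code_tables == {}
```

Because only the small code tables are erased (not the bulk raw data), a purge of this kind can be near-instant - the same property exploited by fast purge SSDs.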
Editor's comments:- The data compaction and therefore CPU utilization claims do seem credible - although the gains are likely to be application dependent.
Throughout the data computing
industry smart people are going back to first principles and tackling the
embedded problems of inefficiencies and lack of intelligence which are buried in
the way that data is stored and moved. The scope for improvement in CPU and
storage utilization was discussed in my 2013 article -
meet Ken - and the
enterprise SSD software event horizon.
The potential for improvement is everywhere - not just in pre-SSD era systems. For example Radian is picking away at the overheads caused within regular flash SSDs themselves by stripping away the FTL.
Another company is aiming to push the limits of processing with new silicon aimed at memory-centric systems. For a bigger list of companies pushing away at
datasystems limitations you'd have to read the
SSD news archive
for the past year or so.
But all new approaches have risks. I think the particular risks with Symbolic IO's architecture are these:-
- Unknown vulnerability to data corruption in the code tables. This would be like having an encrypted system in which the keys have been lost - but the impact would be multiplied by the fact that each raw piece of data has higher value (due to compacting).
Established storage architectures leverage decades of experience of data healing knowhow. We don't know enough about the internal resiliency architecture in Symbolic IO's systems. It's reasonable to assume that there is something there.
But all companies can make mistakes - as we saw in server architecture, and in storage architecture when Cisco discovered common mode failure vulnerabilities in WhipTail's "high availability" systems.
I expect that Symbolic will be saying much more about its reliability and data
corruption sensitivities during the next few years. In any case - Symbolic's
investment in its new data architecture will make us all rethink the bounds of
what is possible from plain hardware.
- Difficult to quantify risk of "false positive" shutdowns from the security sensor. This is a risk factor which I have written about in the context of the fast purge SSD market. Again this is a reliability architecture issue.
|Rambus and Xilinx partner
on FPGA in DRAM array technology|
|Editor:- October 4, 2016 - Rambus today announced a license agreement with Xilinx that covers Rambus patented memory controller, SerDes and security technologies.
Rambus is also exploring the use of Xilinx FPGAs in its SDA research program. The SDA - powered by an FPGA paired with 24 DIMMs - offers high DRAM memory densities and has potential uses as a CPU offload agent (in-situ