
Concept Product CP121 - 8 x EDSFF E1.S NVMe SSD (9.5/15mm) Mobile Rack for External 5.25" Drive Bay

icydock_admin

Administrator
Staff member
Jan 21, 2024
Concept Product CP121
8 x EDSFF E1.S NVMe SSD (9.5/15mm) Mobile Rack for External 5.25" Drive Bay

CP121_1280x853_01.webp


👉 Product Page: https://global.icydock.com/product_319.html 👈
Any product suggestions or new ideas will be highly appreciated!


Key Features

  • Fits in a standard 5.25" drive bay.
  • Supports 8 x E1.S (EDSFF) NVMe SSDs (9.5mm/15mm height).
  • Uses 4 x MCIO 8i (SFF-TA-1016 8i) ports, supporting PCIe 5.0 x4 data transfer rates of up to 128 Gbps per drive (lane accounting sketched after this list).
  • Ability to mix and match E1.S NVMe SSDs in a single enclosure.
  • Supports tool-less drive installation for E1.S NVMe SSDs.
  • Equipped with dual 40mm detachable fans for superior cooling performance.
  • Fan on/off control - adjust fan settings according to your SSD temperature.
  • The aluminum drive tray acts as a heatsink to dissipate heat generated by the SSD.
  • Includes a thermal pad that conducts heat from the drive to the heatsink tray.
  • Ruggedized full-metal enclosure that meets flammability requirements.
  • Removable and tool-less drive installation for easy drive maintenance.
  • Eagle-hook tray latch – securely holds drive trays in the enclosure.
  • Active power technology (APT) – LED and fan only power up if a drive is installed.
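For those who want to check the lane accounting behind the MCIO figure above, here is a quick illustrative sketch (arithmetic only, not a wiring diagram):

```python
# Quick lane-accounting sketch (illustrative arithmetic only, based on the
# feature list above, not a wiring diagram).

BAYS = 8
LANES_PER_BAY = 4        # PCIe 5.0 x4 per E1.S drive
LANES_PER_MCIO_8I = 8    # one SFF-TA-1016 8i connector carries 8 lanes
GEN5_GBPS_PER_LANE = 32  # 32 GT/s raw line rate per Gen5 lane

total_lanes = BAYS * LANES_PER_BAY
mcio_ports = total_lanes // LANES_PER_MCIO_8I
per_drive_gbps = LANES_PER_BAY * GEN5_GBPS_PER_LANE

print(f"{total_lanes} lanes total -> {mcio_ports} x MCIO 8i ports")
print(f"~{per_drive_gbps} Gbps raw per drive at PCIe 5.0 x4")
```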

CP121-banner.webp

CP121-drive_trays.webp

CP121-pcie4.webp

CP121-Cooling.webp

CP121-aluminum_heatsink.webp

CP121-led_indicator.gif

 
Please release the CP121 already. I’ve been waiting forever for your solution, and there is no alternative to it.
 
  • Like
Reactions: LiKenun
Anything else you guys would like to add/modify/suggest to the current CP116 design before we finally release it? Also, for any modification, could you explain the reasoning behind it, i.e., why it is important? That would allow us to better determine whether the modification makes sense :)
 
For me, both the CP116 and the CP121 are great products to bring to the market. However, the CP122 is more important for me, as I only have one free 5.25" slot available and I need the maximum storage density. Please also offer optional M.2 NVMe adapters for both, or, even better, include them in the scope of delivery. There are certainly boards that support PCIe hot-swapping, and if the power supply is connected last, I see no problem in using M.2 SSDs in them.
 
both the CP116 and the CP121
Could both not be supported by the same enclosure? From the shape of the different heat sink heights, I was under the impression that an enclosure could simply support the highest density, and thicker E1.S SSD heat sinks would simply block access to some of the ports in the back if used, thereby enabling the enclosure to support a mix-and-match kind of configuration. One thing might need to change to enable mixing thicknesses and latch widths: the interior ribs that guide the SSD to the connector inside and keep it in place must provide the necessary clearance for larger heat sinks to slide in. The downside is the expense and waste of supporting the highest density (e.g., the enclosure supports 8 SSDs, but the customer can only ever use 4).
 
That's why it's perfectly fine to offer both products separately: the 121 for high density with 8x M.2 and up to 15mm E1.S, and the 116 for up to 4x M.2 or 25mm E1.S SSDs. However, I still think it's important to support the M.2 form factors as well, since the connection to 4 PCIe lanes is equally necessary for high bandwidth. Moreover, PCIe Gen5 M.2 SSDs generate so much heat that adequate cooling is just as worthwhile for them.

A single enclosure for both makes the effort for the backplane too great if everything is to be connected with 4 PCIe lanes. The wiring of the MCIO connections would certainly be quite complex and expensive.
 
A single enclosure for both makes the effort for the backplane too great if everything is to be connected with 4 PCIe lanes. The wiring of the MCIO connections would certainly be quite complex and expensive.
Perhaps I worded it a bit confusingly before. The expense I was referring to is the expense to me. If the product supports 8 SSDs (presumably at a higher price point) and I can only use half of them, that is a cost inefficiency borne by me.

This concept product supports both 9.5 mm and 15 mm, so there is very little preventing support of 25 mm.

The SFF-TA-1006 (E1.S) spec notes the following for the 9.5–25 mm subset of E1.S SSDs (see the sketch after this list):
  • The label and fin area have the same bounding rectangle on the secondary side of the SSD, defined by:
    • The offset from the tip of the edge/plug (G), 7.5 mm (C8);
    • The offset of the other end, 106.75 mm (C3 − C5), yielding a length of 99.25 mm;
    • The offset from the bottom side (W), 5 mm (B9);
    • And the width, 23.75 mm (B8).
  • The thickness of the SSD assembly excluding the fins is always 9.5 mm (A6).
  • The only difference is the presence or absence of the fins and, if present, a thickness of either 15 mm (A9b/A10b) or 25 mm (A9a/A10a).
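Here is a small sketch that encodes the figures above (the variable names are mine and purely illustrative) and derives the shared 99.25 mm x 23.75 mm label/fin area:

```python
# Sketch of the E1.S (SFF-TA-1006) figures quoted above. Variable names are
# mine and purely illustrative; the values are the ones listed in the post.

LABEL_FIN_AREA = {
    "offset_from_plug_tip_mm": 7.5,   # C8
    "offset_of_far_end_mm": 106.75,   # C3 - C5
    "offset_from_bottom_mm": 5.0,     # B9
    "width_mm": 23.75,                # B8
}
BASE_THICKNESS_MM = 9.5               # A6, SSD assembly excluding fins
OVERALL_THICKNESS_MM = {              # differs only by the fins, if any
    "no fins": 9.5,
    "15 mm fins": 15.0,               # A9b/A10b
    "25 mm fins": 25.0,               # A9a/A10a
}

length_mm = (LABEL_FIN_AREA["offset_of_far_end_mm"]
             - LABEL_FIN_AREA["offset_from_plug_tip_mm"])
print(f"shared label/fin area: {length_mm:.2f} x {LABEL_FIN_AREA['width_mm']} mm")
for variant, overall_mm in OVERALL_THICKNESS_MM.items():
    print(f"{variant}: base {BASE_THICKNESS_MM} mm, overall {overall_mm} mm")
```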
The guiding ribs in the ICY DOCK enclosures I have look like this:
IMG_1289.JPG

So assuming the ribs are not wide enough to intrude into the fin area, a version that supports 15 mm has very little standing in the way of supporting 25 mm. The only additional design work I think would be required is a latch that is double the width of the 15 mm one—perhaps sold separately. And with that, this concept product supports all three thicknesses with one design.

I personally would not mind paying for 8 slots but being only able to use 4. It would have been my choice of SSDs that created the limitation after all. A lower-cost CP116 could come later, but I would still favor the more flexible option of an enclosure that supports all three thicknesses. 🙂


Anything else you guys would like to add/modify/suggest to the current CP116 design before we finally release it?
This is more of a comment than a suggestion for additions or modifications, but there are many like me who have used existing ICY DOCK enclosures for SSD connectivity and organization. One thing I’ve experienced is the incredible difficulty of getting a clean PCIe 4.0 connection. In my experience it’s mostly the fault of the cables, and then of shoddy adapters, but I may have come across others who swear it’s the ICY DOCK enclosure.

If you are not already doing this, my favored test of connection stability is to connect the PCIe device (the ICY DOCK enclosure in this context) to a motherboard slot and/or PCIe switch with PCIe Advanced Error Reporting (AER) enabled. In Windows, instability then shows up as WHEA errors in the event log such as this. The worse the stability, the more frequent these log entries become—especially when the device is under heavy use. A redriver may be necessary, but the product is only reliable if it can be shown to work stably with one-meter cables under load.
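For anyone who wants to run the same check, here is a rough sketch of how I do it (my own script, nothing official): it pulls the most recent WHEA-Logger entries via the built-in wevtutil tool, so you can compare counts before and after a sustained transfer.

```python
# Rough sketch (my own, nothing official): count recent WHEA-Logger events on
# Windows so link stability can be compared before/after a cable or adapter
# swap. Assumes AER is enabled in firmware and wevtutil is on the PATH.
import subprocess

QUERY = "*[System[Provider[@Name='Microsoft-Windows-WHEA-Logger']]]"

def recent_whea_events(count: int = 50) -> str:
    """Return the newest WHEA-Logger entries from the System log as text."""
    result = subprocess.run(
        ["wevtutil", "qe", "System", f"/q:{QUERY}",
         f"/c:{count}", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    events = recent_whea_events()
    # A stable link should produce an empty (or at least non-growing) list,
    # even while the drives are under sustained load.
    print(events if events.strip() else "No WHEA-Logger events found.")
```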
 
I think the best approach would be to release both products. That way, one wouldn’t have to pay more than what is actually needed. On the old German product page (https://de.icydock.com/product_323.html), you can still clearly see the M.2/E1.S adapters and dimensions I have in mind. The CP116 remains unaffected by this and can still be released as it is; the M.2 adapters wouldn't necessarily be required there either. ;)
 
That way, one wouldn’t have to pay more than what is actually needed.
You said
the CP122 is more important for me

I presented a way in which one of the two products could be released first without precluding the use of 25 mm SSDs. If the design is already what I think it is, the extra cost is $0. 25 mm support could be added later as a separate add-on product: double-width latches. ICY DOCK could then take their sweet time perfecting the lower-density CP116, which is hard-capped at 4 SSDs.

You will have bought only this CP122 and have your 8 SSDs inside. I will have bought the CP122 and the double-width latches and have my 4 SSDs inside. We will both be happy, and you will not be paying for features to support my use case. 🙂

It will be the same as your use case: your M.2-to-E1.S requirement will be solved using an additional product, without adding design/manufacturing expense to this product.
On the old German product page (https://de.icydock.com/product_323.html), you can still clearly see the M.2/E1.S adapters and dimensions I have in mind.
 
  • Like
Reactions: blackstone
Yes, that's true, I just had a small lapse in judgment. What worries me a bit is the changed product page and the fact that the M.2 adapters are no longer mentioned. Personally, I would prefer to have two of each variant :D. By now, the market has seen quite a bit of movement, and the larger form factors are slowly making their way into servers (so E1.S/E3.S are being complemented by E1.L/E3.L).

Ultimately, I’ve been keeping an eye on the 121 for over a year now, and so far it hasn’t really looked like this product will be released anytime soon, so I hope that maybe this time things will progress a bit more.
 
We understand that the NVMe ecosystem is not yet fully mature and that compatibility issues with various setups are common. To help address these challenges, we are working to source our own in-house-certified cables and, eventually, an HBA card, to save customers the hassle of finding the right combination.

To answer your question about using only four bays in an 8-bay high-density setup: we are designing a cable split into x1, x1, x1, x1 channels instead of the usual x4 or x2 configuration of most cables on the market today. While this approach reduces the bandwidth available to each drive, it is well-suited for achieving maximum IOPS. This limitation has minimal impact if you're primarily leveraging the SSD's random read and write capabilities.
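As a rough illustration of that trade-off (back-of-the-envelope figures only, not a product specification), here is the usable PCIe 5.0 bandwidth per drive for x4/x2/x1 links, and roughly how much 4 KiB random I/O a single Gen5 lane could still carry:

```python
# Back-of-the-envelope figures only, not a product specification: usable
# PCIe 5.0 bandwidth per drive for different link widths, and roughly how
# much 4 KiB random I/O a single Gen5 lane could still carry.

GEN5_GT_PER_LANE = 32.0   # GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding (ignores protocol overhead)

def usable_gbytes_per_s(lanes: int) -> float:
    return lanes * GEN5_GT_PER_LANE * ENCODING / 8  # GB/s

for lanes in (4, 2, 1):
    print(f"x{lanes}: ~{usable_gbytes_per_s(lanes):.1f} GB/s per drive")

iops_4k = usable_gbytes_per_s(1) * 1e9 / 4096
print(f"a single Gen5 lane still fits roughly {iops_4k / 1e6:.1f}M 4 KiB IOPS")
```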

The idea of mixing and matching SSDs within the same enclosure is intriguing. We will discuss this internally to evaluate its feasibility.
 
  • Like
Reactions: LiKenun
To answer your question about using only four bays in an 8-bay high-density setup: we are designing a cable split into x1, x1, x1, x1 channels instead of the usual x4 or x2 configuration of most cables on the market today. While this approach reduces the bandwidth available to each drive, it is well-suited for achieving maximum IOPS. This limitation has minimal impact if you're primarily leveraging the SSD's random read and write capabilities.
I'm intrigued by this split into individual lanes. 🙂 Are we talking about MCIO 4i (quad-lane) at the host end and 4 × MCIO 4i (single-lane) at the device end of the cable? I have a Broadcom P411W-32P (PCIe 4.0 switch, with SlimSAS 8i) and HighPoint Rocket 1628A (PCIe 5.0 switch, with MCIO 8i). I believe most other HBAs out on the market expect cables which are oct-lane at the host end, including Adaptec and ATTO.

I've not gotten anything narrower than quad-lane connections to work with the HBAs that I have, with either my existing ICY DOCK enclosures or with direct-attach cables. A working setup probably requires participation from the enclosure too—one that can send sideband signals to tell the HBA what lane width to operate each port in. I'm curious what kind of assemblage your design expects the consumer to have in order to make everything work end to end, whether it is flexible enough to fit in with existing hardware (e.g., my HBAs), and whether it provides a choice of fewer or more lanes per SSD depending on the cable used.

The set-up I’m aiming for is PCIe 5.0 × 1 for dense, bulk storage, and PCIe 3.0 × 4 for a high-performance 3D XPoint (Optane) array. I’ve found Optane SSDs to be capable of hitting 0.5+ GB/s with single-thread, queue-depth-1, random I/O, but Intel never produced anything beyond the PCIe 4.0 generation, so putting them in single-lane operation would cripple their performance in more threaded workloads or slightly higher queue depths.
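To put rough numbers on that claim, here is a quick sketch (my assumptions: near-linear scaling at low queue depths, and the ~0.5 GB/s QD1 figure above) of the queue depth at which the link, rather than the drive, becomes the bottleneck:

```python
# Quick sketch with my own assumptions: near-linear scaling at low queue
# depths, and the ~0.5 GB/s QD1 random throughput mentioned above. It
# estimates the queue depth at which the link, not the drive, becomes the
# bottleneck for each lane configuration.

PER_LANE_GB_S = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}  # usable GB/s per lane
QD1_RANDOM_GB_S = 0.5  # observed single-thread, QD1 random throughput

def saturation_qd(gen: str, lanes: int) -> float:
    return PER_LANE_GB_S[gen] * lanes / QD1_RANDOM_GB_S

for gen, lanes in (("PCIe 3.0", 4), ("PCIe 4.0", 1), ("PCIe 3.0", 1)):
    print(f"{gen} x{lanes}: link saturates around QD {saturation_qd(gen, lanes):.0f}")
```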
 
  • Like
Reactions: blackstone
I'm intrigued by this split into individual lanes. 🙂 Are we talking about MCIO 4i (quad-lane) at the host end and 4 × MCIO 4i (single-lane) at the device end of the cable? I have a Broadcom P411W-32P (PCIe 4.0 switch, with SlimSAS 8i) and HighPoint Rocket 1628A (PCIe 5.0 switch, with MCIO 8i). I believe most other HBAs out on the market expect cables which are oct-lane at the host end, including Adaptec and ATTO.

I've not gotten anything narrower than quad-lane connections to work with the HBAs that I have, with either my existing ICY DOCK enclosures or with direct-attach cables. A working setup probably requires participation from the enclosure too—one that can send sideband signals to tell the HBA what lane width to operate each port in. I'm curious what kind of assemblage your design expects the consumer to have in order to make everything work end to end, whether it is flexible enough to fit in with existing hardware (e.g., my HBAs), and whether it provides a choice of fewer or more lanes per SSD depending on the cable used.

The set-up I’m aiming for is PCIe 5.0 × 1 for dense, bulk storage, and PCIe 3.0 × 4 for a high-performance 3D XPoint (Optane) array. I’ve found Optane SSDs to be capable of hitting 0.5+ GB/s with single-thread, queue-depth-1, random I/O, but Intel never produced anything beyond the PCIe 4.0 generation, so putting them in single-lane operation would cripple their performance in more threaded workloads or slightly higher queue depths.
Apologies for the confusion earlier. I meant a splitter cable that goes from x8 to x1x1x1x1x1x1x1x1 or x8 to x2x2x2x2. Since most high-end HBA cards support various levels of bifurcation, we can leverage this feature to provide scalability for higher-density drive setups.

On the enclosure side, I believe no changes would be necessary as long as we have the correct configuration on the HBA’s end. However, I may be mistaken, so I will consult with our engineering team to confirm and provide more details. Stay tuned.
 
  • Like
Reactions: LiKenun
Please release the CP121 already. I’ve been waiting forever for your solution, and there is no alternative to it.
Hi blackstone,

Thank you for your interest in this product. CP121 is expected to be available in 2025Q2.

If the design is already what I think it is
Hi LiKenun,

Our current design does allow installation of 25mm E1.S SSDs. However, there are some trade-offs, including blocking the adjacent slot as you mentioned, so we did not list 25mm compatibility in the product specification.


If you have any other questions, please feel free to ask. :)
 
Hi blackstone,

Thank you for your interest in this product. CP121 is expected to be available in 2025Q2.

If you have any other questions, please feel free to ask. :)
Great News! Thank you

Will there also be M.2 adapters (22110) available at launch?
 
Great News! Thank you

Will there also be M.2 adapters (22110) available at launch?
Hi blackstone,

We are currently assessing the market demand for an M.2 adapter.
It would be helpful if you could answer the following questions:
(1) It is unlikely that a 22110 M.2 SSD would fit on such an adapter due to dimensional restrictions. Would this be a concern for your application?
(2) What are the reasons you want to use M.2 SSDs in this product? (e.g., to utilize existing M.2 SSDs, for flexibility, for future-proofing...)
(3) How many M.2 SSDs are you planning to install in this product at the same time?
 
  • Like
Reactions: LiKenun
1. Since I have many M.2 SSDs in the 22110 format, it would be great if I could use them with this product as well. However, if they don't fit, I do have other uses for them if need be. It wouldn't stop me from purchasing, but 22110 support would make the product significantly more versatile.


2. Because I have many of them, and I could also use them to build a wipe/clone workstation for all client and server SSDs. Additionally, it would make it easy to install PCIe Gen 5 M.2 SSDs with more than 10 watts of power consumption, which benefit from better cooling.


3. All 8 slots. Corresponding adapter cards and 4 x MCIO 8i cables are already installed in the chassis.