Concept Product CP074-2 - 8 x M.2 NVMe SSD to PCIe x16 Mobile Rack Adapter Card for PCIe Expansion Slot with PCIe Bifurcation Function (FHFL)

icydock_admin
Jan 21, 2024
Concept Product CP074-2
8 x M.2 NVMe SSD to PCIe x16 Mobile Rack Adapter Card for PCIe Expansion Slot with PCIe Bifurcation Function (FHFL)

[Image: CP074-2 product photo]


👉 Product Page: https://global.icydock.com/product_365.html 👈
Any product suggestion or new idea will be highly appreciated!


Key Features


• Accommodates 8 x M.2 NVMe SSDs with drive lengths from 30mm to 80mm (2230 / 2242 / 2260 / 2280) in a single PCIe x16 expansion slot (full-height, full-length).
• A built-in PCIe bifurcation function lets users access eight individual M.2 NVMe SSDs in systems that do not natively support PCIe bifurcation.
• Delivers transfer speeds of up to 128Gbps for each M.2 NVMe PCIe SSD (see the quick bandwidth reference after this list).
• Effortless maintenance with removable drive trays featuring a tool-less design.
• Superior heat dissipation via integrated M.2 drive heat sinks and thermal pads.
• 50mm blower fan ensures optimal cooling for each M.2 NVMe SSD.
• Individual drive activity LEDs: solid green indicates power, while flashing indicates drive activity.
• Active power technology (APT) saves energy by shutting the device down when no drive is installed.
• Compatible with AMD B650E/X670E PCIe 5.0 for NVMe RAID and supports Intel® VROC functions.
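
As a quick bandwidth reference for the list above (an illustrative sketch, not an official spec table for this card): the "up to 128Gbps per SSD" figure matches the raw line rate of a x4 link at 32 GT/s (PCIe 5.0). The short Python snippet below computes raw and effective rates per x4 M.2 link for PCIe 3.0/4.0/5.0, which is also relevant to the Gen3/Gen4 cost discussion later in this thread:

```python
# Rough PCIe bandwidth calculator (illustrative only; not a product spec).
# Per-lane transfer rates are the standard values; PCIe 3.0, 4.0 and 5.0
# all use 128b/130b line encoding.
GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}   # GT/s per lane
ENCODING_EFFICIENCY = 128 / 130                  # 128b/130b encoding overhead

def link_bandwidth(gen: str, lanes: int) -> tuple[float, float]:
    """Return (raw Gbps, effective GB/s) for a PCIe link."""
    raw_gbps = GT_PER_LANE[gen] * lanes          # one transfer ~ one bit on the wire
    effective_gbps = raw_gbps * ENCODING_EFFICIENCY
    return raw_gbps, effective_gbps / 8          # bits -> bytes

for gen in ("3.0", "4.0", "5.0"):
    raw, eff = link_bandwidth(gen, 4)            # one M.2 slot = x4 link
    print(f"PCIe {gen} x4: {raw:.0f} Gbps raw, ~{eff:.1f} GB/s effective")
# PCIe 3.0 x4:  32 Gbps raw, ~3.9 GB/s effective
# PCIe 4.0 x4:  64 Gbps raw, ~7.9 GB/s effective
# PCIe 5.0 x4: 128 Gbps raw, ~15.8 GB/s effective
```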


[Image: CP074-2 compatible PCIe slots]

[Animation: CP074-2 drive tray]

[Animation: CP074-2 compatible PCIe slots]

[Image: CP074-2 speed]

[Animation: CP074-2 LED indicators]

[Image: CP074-2 VROC]

 
Hello, is there any update on this product?

Also, would you consider making a cheaper PCIe 3.0 version or PCIe 4.0 version (using ASMedia's new PCIe switch chip ;))

I really like this concept, because you don't need bifurcation, cables, or adapters, making it very useful for consumer/workstation Intel & AMD motherboards with limited PCIe slots.

I'm just scared of the price, especially for a 5.0 version, whereas a 3.0 version would be cheaper and still meet the bandwidth needs for a lot of users :) (especially for homelabs or a NAS)

Also, a curious question: is hot-swap for the M.2s possible, since it uses a PCIe switch chip instead of relying on the motherboard?
 
Hi Kingkaido,

Thank you for your interest in this concept product.
We are currently evaluating a PCIe 4.0 version of this concept product, and we are actively searching for a chip solution that will allow us to create the most cost-effective product.
As for the hot-swap function, M.2 NVMe SSDs are not designed to be hot-swappable, so we would advise against it even if a PCIe switch chip is managing them.
 
Hi team,

I'm missing some info:
- That autobifurcation - how does it work? With more than 4 drives does it use 2 lanes per SSD, and with 4 or fewer drives 4 lanes each?
- Do you have to use a specific number of drives, like pairs at least, or any number from 1-8?
- Can you mix and match SSD PCIe versions? (some Gen4 with Gen5 drives)
- If you can mix and match, how does it allocate the lanes?
- What about the fan? Does it have variable RPM and some thermal table? What is the noise level (if I want to use it in a workstation)?
- I believe it is also backward compatible with PCIe 3.0 and 4.0, right?

PS: great device as always, Icydock team
PS2: I believe you were also offering some testing; does that still exist? If so, I'm interested in this device for sure
 
Great questions! I'll have our PM team answer your questions. Let's wait for their response.
 
Hi Martin,

Thank you for your interest in this product. Below are the responses to your questions. However, please note that this is a concept product, and its final specifications are still subject to change during our development process.

- That autobifurcation - how does it work? With more than 4 drives does it use 2 lanes per SSD, and with 4 or fewer drives 4 lanes each?
Bandwidth is allocated automatically based on the number of SSDs being accessed, which means that even if 8 SSDs are installed, when only 4 of them are being accessed, each accessed SSD can still have x4 lanes of bandwidth available (see the rough sketch after this post).
- Do you have to use a specific number of drives, like pairs at least, or any number from 1-8?
There is no restriction on the number of drives used.
- Can you mix and match SSD PCIe versions? (some Gen4 with Gen5 drives)
Yes.
- If you can mix and match, how does it allocate the lanes?
It will allocate PCIe lanes as described above.
- What about the fan? Does it have variable RPM and some thermal table? What is the noise level (if I want to use it in a workstation)?
The fan speed can be adjusted to High/Low/Off. The noise level is approximately 33 dB.
- I believe it is also backward compatible with PCIe 3.0 and 4.0, right?
Yes.
- I believe you were also offering some testing; does that still exist? If so, I'm interested in this device for sure
We keep a record of customer interest. If a sample-testing opportunity becomes available, we will contact you first.

I hope these responses addressed your questions. If there is anything you would like to clarify, please feel free to let us know. Thank you.
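
A minimal Python sketch of the bandwidth-sharing behavior described in the reply above, assuming a x16 upstream link, eight x4 downstream ports, and upstream bandwidth divided evenly among whichever drives are actively transferring; this models the stated behavior only and is not the switch's actual arbitration logic:

```python
# Illustrative model of the "bandwidth follows the active drives" behavior
# described above. Assumptions (not official specs): x16 upstream link, eight
# x4 downstream ports, upstream bandwidth split evenly among active drives.
UPSTREAM_LANES = 16
LANES_PER_DRIVE = 4

def effective_lanes_per_active_drive(active_drives: int) -> float:
    """Per-drive share of upstream bandwidth, expressed in 'lanes',
    capped by each drive's own x4 downstream link."""
    if active_drives == 0:
        return 0.0
    return min(UPSTREAM_LANES / active_drives, LANES_PER_DRIVE)

for n in (1, 2, 4, 6, 8):
    print(f"{n} active drive(s): ~x{effective_lanes_per_active_drive(n):.1f} "
          "of upstream bandwidth each")
# 1-4 active drives -> a full x4-equivalent share each
# 8 active drives   -> roughly an x2-equivalent share each
```

With four or fewer drives busy, each one can still see its full x4 link, matching the answer above; only when more than four are busy at once does the per-drive share fall below x4.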
 
The autobifurcation function is fascinating. I'm intrigued by its performance on a motherboard/chipset that doesn't normally support bifurcation, or on those that only support up to 3-4 drives, typical of consumer-grade motherboards (x4 lanes per drive across a max of four drives).

If one were to completely fill this with 8x NVMe drives and put them all under a single ZFS pool of, let's say, four mirrored vdevs, that would mean all eight drives are getting accessed concurrently at all times. Does that then split each drive down to x2 lanes? Could this pose an issue with accessing the drives reliably, or with drives randomly dropping from a ZFS pool, if bandwidth is getting choked down that much?

Very curious to test this under different ZFS configurations and see the performance impact when relying on the built-in autobifurcation feature on a consumer motherboard with limited lanes.
 
Hi JackBurton,

Thank you for your interest in our concept product and for bringing up such an excellent question — your observation touches on a very important point regarding how bandwidth is handled in multi-SSD configurations.

To clarify, the current concept of CP074 involves integrating a PCIe Gen4 packet switch, and we're considering something similar to the ASMedia ASM5848. Unlike a static PCIe bifurcation setup (e.g., fixed x2 per SSD), a packet switch like the ASM5848 uses fixed lane configurations for each downstream port (e.g., x4), but dynamically schedules packet traffic among ports based on I/O activity. This enables more efficient utilization of upstream bandwidth without statically partitioning lanes.

So while in theory each SSD's share of upstream bandwidth could drop to the equivalent of x2 if all drives are active simultaneously, in practice the switch handles traffic arbitration and lane assignment dynamically.
As for ZFS, while the PCIe switch avoids bottlenecks under typical conditions, overall performance will still depend heavily on system architecture, such as CPU thread handling, memory bandwidth (ARC), and how the ZFS pool is structured. During high-load scenarios, queuing and arbitration at the switch or controller level may introduce latency that affects throughput.
We are still evaluating the full interaction between the hardware setup and ZFS performance. Real-world testing under various configurations will be part of our development validation process.
Thanks again for your thoughtful question — we truly appreciate your engagement. If you have a specific use case in mind (e.g., ZFS over 8 NVMe SSDs for storage virtualization or backup arrays), we’d love to hear more about it. Your feedback helps us build better solutions.
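
Following on from the reply above, here is a rough back-of-the-envelope sketch for the "all eight drives busy" ZFS case, comparing a static x2-per-drive bifurcation layout with the packet-switch approach; it assumes PCIe 4.0 links (~1.97 GB/s usable per lane), a x16 upstream link, and even sharing among concurrently active SSDs, and the numbers are illustrative rather than measured:

```python
# Back-of-the-envelope comparison for the "all 8 drives busy" ZFS case.
# Assumptions (illustrative, not measured): PCIe 4.0 links, a x16 upstream
# link, and even sharing of upstream bandwidth among concurrently active SSDs.
LANE_GB_S = 16 * (128 / 130) / 8        # ~1.97 GB/s usable per PCIe 4.0 lane
UPSTREAM_GB_S = 16 * LANE_GB_S          # x16 upstream ceiling, ~31.5 GB/s
STATIC_X2_GB_S = 2 * LANE_GB_S          # fixed x2 per drive, ~3.9 GB/s cap

def switch_share_per_drive(active_drives: int) -> float:
    """Each drive keeps a x4 downstream link; the x16 upstream bandwidth is
    time-shared among whichever drives are busy at the moment."""
    per_drive_link = 4 * LANE_GB_S
    return min(per_drive_link, UPSTREAM_GB_S / active_drives)

for active in (1, 4, 8):
    print(f"{active} active drive(s): static x2 ~{STATIC_X2_GB_S:.1f} GB/s/drive, "
          f"packet switch ~{switch_share_per_drive(active):.1f} GB/s/drive")
# 1 active: static ~3.9, switch ~7.9 GB/s per drive
# 4 active: static ~3.9, switch ~7.9 GB/s per drive
# 8 active: static ~3.9, switch ~3.9 GB/s per drive (both at the x16 ceiling)
```

With all eight drives saturated, both layouts converge on the same x16 ceiling (~3.9 GB/s per drive here); the switch's advantage shows up whenever only a subset of drives is busy. Reduced throughput by itself should not cause drives to drop from a pool; arbitration latency under sustained load is the more useful thing to measure.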
 
This could be quite compelling at the right price point. It functions essentially like HighPoint's new NVMe switch adapter, the Rocket 1628A. I think I'd almost prefer MCIO connectors, for the flexibility of installing drives in other enclosures/positions (like a ToughArmor MB873MP-B V2 enclosure), and I think it would run cooler without the drives installed directly on the card itself. Not sure if there would be a way to adapt this concept (via cable or adapter) to do something like that.

 
Hi JackBurton,

You’re right!
We also offer enclosures for other standard form-factor spaces—such as 5.25" bays, 3.5" bays, and (Ultra) Slim ODD bays—so customers can make the most of limited chassis real estate.
Using a HighPoint Rocket 1628A to connect an enclosure gives you greater configuration flexibility, but compared with the CP074-2 it entails higher up-front costs (adapter + cable + enclosure). Whether this trade-off is worthwhile depends on your budget and on how much value rapid drive swapping brings to your application.