Egypt's InfiniLink Lands US$10 Million To Power AI Core Tech
Founded by Ahmed Aboul-Ella and Botros George in Egypt in 2022, InfiniLink’s integrated optical transceiver chiplets technology provides the low-power, high-bandwidth-density solutions necessary for AI-driven data centers.

As artificial intelligence (AI) data centers continue to reshape the tech landscape, the demand for faster, more efficient network solutions has never been higher. In response, deeptech innovators in the region are stepping forward with advanced technologies to tackle the unique challenges of this rapidly evolving sector.
Egypt-based semiconductor startup InfiniLink is one such company: in April this year, it raised US$10 million in a seed funding round backed by Taiwan-based fabless semiconductor company MediaTek, Riyadh-based early-stage venture capital firm Sukna Ventures, Cairo-based investment firm Egypt Ventures, and M Empire Angels, an angel investment firm also based in Cairo.
Founded by Ahmed Aboul-Ella and Botros George in Egypt in 2022, InfiniLink’s integrated optical transceiver chiplets (iOTC) technology provides the low-power, high-bandwidth-density solutions necessary for AI-driven data centers. The company’s chiplets are designed for ultra-high-speed data transfer while maintaining energy efficiency, both critical factors as AI models become more complex. The new investment is thus set to support InfiniLink’s efforts to develop advanced data connectivity chips for AI-driven data centers.
In an interview with Inc. Arabia, George, co-founder and Chief Technology Officer (CTO) of InfiniLink, noted how the new capital will impact the startup. "So far, we have been bootstrapping our operation," he said. "The funds will be directed to speeding up our time to market. This includes expanding our teams across all functions to accelerate product development and drive customer engagements, as well as speeding up our chips manufacturing, assembly, and packaging plans."
InfiniLink’s iOTC solution, which comes in the form of small, modular components, converts electrical signals into optical signals and vice versa for high-speed data transmission. The chiplets enable efficient, high-bandwidth, and low-power connectivity, helping to address how AI workloads are managed at data centers.
"Modern AI-driven data centers present a unique challenge and a huge opportunity," George explains. "They are quite different from conventional cloud data centers. They are more centered around the computing accelerators, e.g. Nvidia graphics processing units (GPUs), where you need a huge number of these accelerators to address the computing challenges associated with large AI models."
George points out that the networking power consumption in AI data centers is almost 10 times higher than in conventional cloud data centers, requiring a more powerful infrastructure to function efficiently. "This large number of GPUs needs to be connected through very high bandwidth, low latency, and low power networks to act as one huge supercomputer that can handle the ever-growing AI models," he explains.
“Studying the explosive growth in size and complexity of AI models, it’s obvious that AI-driven data centers are facing two key challenges in keeping up with AI models, whether for training or inference tasks, namely capacity and power consumption," George continues. "You can view a modern AI data center as a network of networks clustered around GPUs, where a 'scale-up network' connects a group of GPUs inside a server rack, and a 'scale-out network' connects different racks together, while a more conventional 'front-end network' connects the central processing units (CPUs) and the cloud. A lot of innovation is happening for the interconnects inside the scale-out and scale-up networks to allow more and more GPUs to be connected together (e.g., 1 million GPUs, to appreciate the scale).”
George points out that InfiniLink is targeting both scale-up and scale-out networks with its solution. “For the scale-out networks connecting different racks, the incumbent technology is the pluggable modules, which are growing in capacity and energy efficiency enabled by highly integrated solutions like our iOTC technology,” he says. Additionally, InfiniLink’s chiplets can enable co-packaged optics (CPO), which is being developed and promoted by large industry players as the solution for next-generation scale-out networks.
CPO is a technology that places optical components directly next to processing chips, like GPUs or switches, inside the same package. This allows data to be converted between electrical and optical signals at the source, reducing power consumption and increasing speed. By shortening the distance data needs to travel, CPO enables faster, more efficient communication between chips, which is especially important in AI data centers handling large volumes of information.
According to George, the real opportunity today lies in scale-up networks, which connect GPUs within the same server rack. These, he explains, are currently dominated by copper, which is “running out of steam” as an enabler of network connectivity. “This represents the next frontier of opportunities for the optical interconnects, where low-power and low-cost optical solutions are crucial to be able to replace the copper interconnects, while being more tightly integrated with the GPU dies on the same interposer inside the package," he says. "This also can be addressed by our technology in a chiplet form factor.”
And while the future is promising, George notes that bringing these technologies to market comes with significant challenges. "The biggest challenge for all these exciting technologies is the deployment timeline, while achieving the required reliability, availability, and serviceability (RAS) to be qualified for integration with a precious asset like the next generation GPU or network switch," he says.
For George and his team, staying ahead of the curve involves addressing the evolving needs of the semiconductor market. Looking ahead, InfiniLink’s vision extends beyond the current trends in AI and optical connectivity, and George tells us that as AI-driven data centers continue to grow, scaling operations will become increasingly important.
For InfiniLink, success will hinge on managing its product roadmap, understanding application-specific needs, and maintaining the pace required by the industry. "Of course, one size won’t fit all, and at InfiniLink, we are putting a lot of focus on clearly understanding the requirements of each application, and carefully architecting our technology [and] products roadmaps for a successful product-market fit, while driving the execution to meet the tight time-to-market mandated by the industry pace," George says.
And while many founders prefer not to fundraise early so as to avoid diluting their equity, George believes that securing funding at this stage will help InfiniLink scale at the speed required to stay competitive. "Our market is very fast-paced, with data center interconnects doubling capacity almost every two years," George explains. "This pace shapes everything in the operation. If you are late in one technology generation, securing design wins with premium prices becomes very challenging, and the bar only becomes higher for the next generation. So, time to market is the key. That’s why we decided to raise funds early on."
George thus advises other founders in the deeptech space to clearly assess their investment needs as they set out on their fundraising journey. "My advice to other founders in this space is that while a bootstrapped operation can get you started, it usually can’t get you going fast enough," he says. "You need to gauge your pace by the industry dynamics and adjust accordingly."