Lifelong learning is the process of gradually acquiring knowledge in dynamic environments, mirroring the efficiency and robustness of human learning. It enables neural networks to incrementally acquire new concepts from sequential experiences. A significant obstacle to achieving lifelong learning, however, is catastrophic forgetting: as the network learns new concepts sequentially, previously acquired knowledge is lost because the geometric structure of the embedding space shifts under continual learning. Our study therefore places a strong emphasis on preserving previously acquired knowledge by maintaining a consistent geometric structure in the neural network's embedding space.
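To make the phenomenon concrete, the following is a minimal, illustrative PyTorch sketch (not the method proposed here): a small network is trained sequentially on two synthetic binary tasks, and both the drop in first-task accuracy and the drift of first-task embeddings are measured. All names and task constructions (`MLP`, `make_task`, the two Gaussian-blob tasks) are hypothetical, chosen only to exhibit forgetting.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic binary tasks whose decision boundaries conflict,
# so learning task B overwrites what was learned for task A.
def make_task(center, n=200):
    x0 = torch.randn(n, 2) * 0.3 + torch.tensor(center)
    x1 = torch.randn(n, 2) * 0.3 - torch.tensor(center)
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])
    return x, y

task_a = make_task([2.0, 2.0])
task_b = make_task([2.0, -2.0])

# Small MLP; `embed` exposes the penultimate layer, i.e. the "embedding space".
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                                  nn.Linear(32, 16), nn.ReLU())
        self.head = nn.Linear(16, 2)

    def embed(self, x):
        return self.body(x)

    def forward(self, x):
        return self.head(self.embed(x))

def train(model, data, steps=300):
    x, y = data
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()

def accuracy(model, data):
    x, y = data
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = MLP()
train(model, task_a)
acc_before = accuracy(model, task_a)
with torch.no_grad():
    emb_before = model.embed(task_a[0])

train(model, task_b)  # sequential training on the new task, no access to task A
acc_after = accuracy(model, task_a)
with torch.no_grad():
    emb_after = model.embed(task_a[0])

# Forgetting shows up as an accuracy drop on task A, accompanied by
# movement of task-A inputs in the embedding space.
drift = (emb_after - emb_before).norm(dim=1).mean().item()
print(f"task-A accuracy: {acc_before:.2f} -> {acc_after:.2f}")
print(f"mean embedding drift for task-A inputs: {drift:.3f}")
```

The sketch only measures the geometric drift that accompanies forgetting; approaches that preserve the embedding geometry would add a constraint on this drift during training on the new task.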