Large Language Models (LLMs) have shown significant potential for performing inference across diverse tasks using heterogeneous sensors with minimal human intervention. Despite their promise, challenges remain, including high inference overhead and the limitations of resource-constrained edge devices. Additionally, model hallucinations, particularly those arising from cognitive biases when interpreting numerical data, hinder performance. This work introduces a novel technique, embedding interpolation, to enhance LLMs' understanding of sensor measurements and mitigate inference overhead on edge devices. By computing embeddings from pre-computed boundary embeddings rather than directly from the input, we improve both efficiency and accuracy. The effectiveness of this approach is demonstrated through visualizations with image generation models.
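The core idea of embedding interpolation can be illustrated with a minimal sketch. The function name, the linear-interpolation scheme, and the toy sensor range below are illustrative assumptions, not the paper's exact formulation: a numerical sensor reading is mapped to a blend of two pre-computed embeddings that correspond to the boundaries of its value range, avoiding a fresh forward pass through the embedding model for every reading.

```python
import numpy as np

def interpolate_embedding(value, lo, hi, emb_lo, emb_hi):
    """Blend pre-computed boundary embeddings for a sensor reading.

    `emb_lo` and `emb_hi` are embeddings computed once (offline) for
    the boundary values `lo` and `hi`; intermediate readings are
    approximated by linear interpolation (an illustrative choice).
    """
    t = (value - lo) / (hi - lo)      # normalize reading into [0, 1]
    t = min(max(t, 0.0), 1.0)         # clamp out-of-range readings
    return (1.0 - t) * emb_lo + t * emb_hi

# Hypothetical boundary embeddings for a sensor range of [0, 100]
emb_lo = np.zeros(4)
emb_hi = np.ones(4)

# A reading of 25.0 lands a quarter of the way between the boundaries
emb = interpolate_embedding(25.0, 0.0, 100.0, emb_lo, emb_hi)
print(emb)  # [0.25 0.25 0.25 0.25]
```

Because the boundary embeddings are computed once ahead of time, the per-reading cost reduces to a few vector operations, which is what makes the approach attractive on resource-constrained edge devices.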