The gravitational search algorithm (GSA) has been successfully applied to many scientific and engineering problems in recent years. In the original GSA and most of its variants, every agent learns from all the agents stored in the same elite group, namely Kbest. This is in essence a fully informed learning strategy, in which every agent shares exactly the same global neighborhood topology. This strategy overlooks the impact of environmental heterogeneity on individual behavior, which easily leads to premature convergence and high runtime cost. To tackle these problems, we take individual heterogeneity into account and propose a locally informed GSA (LIGSA) in this paper. Specifically, in LIGSA, each agent learns from a unique neighborhood formed by its k local neighbors together with the historically best agent, rather than from the single Kbest elite group. Learning from the k local neighbors encourages LIGSA to explore the search space fully and quickly and effectively prevents premature convergence, while the guidance of the global best agent accelerates convergence. LIGSA has been extensively evaluated on the 30 CEC2014 benchmark functions at different dimensions. Experimental results show that, in general, LIGSA remarkably outperforms the compared algorithms in both solution quality and convergence speed.