We introduce a novel negative sampling strategy for graph contrastive learning based on a marginal node indicator, designed to enhance learning of the global manifold across diverse networks. Our approach distinguishes between core and marginal nodes within network clusters, enabling the model to capture both intra-cluster commonalities and inter-cluster distinctions. By extending the sampling scope from subgraphs to clusters, our method supports comprehensive manifold learning, a capability substantiated through fine-tuning experiments with minimal labels. The flexibility of our strategy is further demonstrated by its adaptability to both homophilous and heterophilous networks, achieved by adjusting the number of clusters. Experiments on synthetic and benchmark datasets, including variants of MNIST and several social and citation networks, show an average improvement of 2.7% on node classification tasks. The gain is particularly pronounced in networks with higher levels of heterophily, underscoring the efficacy of our approach on complex network structures. Our method is not tied to a specific model and applies to a broad range of graph contrastive learning frameworks.
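To make the core idea concrete, the following is a minimal NumPy sketch of one plausible reading of the strategy: cluster node embeddings, flag nodes far from their cluster centroid as "marginal", and draw negatives for an anchor from other clusters with extra weight on marginal nodes. All function names, the quantile threshold, and the weighting scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on node embeddings X (n, d); returns labels and centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

def marginal_indicator(X, labels, centers, quantile=0.7):
    """Mark a node as marginal if it is among the farthest from its own
    cluster center (an assumed proxy for sitting near a cluster boundary)."""
    d_own = np.linalg.norm(X - centers[labels], axis=1)
    marginal = np.zeros(len(X), dtype=bool)
    for j in np.unique(labels):
        m = labels == j
        marginal[m] = d_own[m] >= np.quantile(d_own[m], quantile)
    return marginal

def sample_negatives(anchor, labels, marginal, num_neg, rng):
    """Sample negatives from clusters other than the anchor's,
    up-weighting marginal nodes (factor 2 is an arbitrary choice here)."""
    candidates = np.flatnonzero(labels != labels[anchor])
    weights = np.where(marginal[candidates], 2.0, 1.0)
    return rng.choice(candidates, size=num_neg, replace=True,
                      p=weights / weights.sum())
```

Adjusting `k` is the knob the abstract alludes to for homophilous versus heterophilous graphs: more clusters yield finer-grained negative pools when same-class nodes are not neighbors.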