As per this official doc, the Kubernetes cluster is managed by the Kubernetes control plane (historically called the master), which runs the API Server, Controller Manager, Scheduler, and etcd components.
Control plane components can run on any machine in the cluster. However, for simplicity's sake, setup scripts typically start all control plane components on the same machine and do not run user containers on that machine. For an example of a control plane setup that runs across multiple machines, see Creating Highly Available Clusters with kubeadm in the Kubernetes Components docs.
kube-scheduler
Control plane component that watches for newly created Pods with no
assigned node, and selects a node for them to run on. Factors taken
into account for scheduling decisions include: individual and
collective resource requirements, hardware/software/policy constraints,
affinity and anti-affinity specifications, data locality,
inter-workload interference, and deadlines.
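To illustrate a couple of these factors, here is a minimal sketch of a Pod spec expressing resource requests and a node-affinity constraint; the Pod name and the `disktype` node label are hypothetical, not from the original doc:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:                # individual resource requirements the scheduler must satisfy
        cpu: "500m"
        memory: "256Mi"
  affinity:
    nodeAffinity:              # hardware constraint: only schedule onto nodes labelled disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # hypothetical node label
            operator: In
            values: ["ssd"]
```

The scheduler will leave this Pod Pending until it finds a node with enough unreserved CPU and memory that also carries the required label.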
kube-controller-manager
Control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce
complexity, they are all compiled into a single binary and run in a
single process.
Some types of these controllers are:
Node controller: Responsible for noticing and responding when
nodes go down.
Job controller: Watches for Job objects that represent one-off
tasks, then creates Pods to run those tasks to completion.
EndpointSlice controller: Populates EndpointSlice objects (to
provide a link between Services and Pods).
ServiceAccount controller: Creates default ServiceAccounts for new
namespaces.
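As an example of what the Job controller watches for, a minimal one-off Job (the name and command are hypothetical) looks like this; when it is created, the Job controller creates a Pod and tracks it to completion:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task           # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]   # the one-off task to run to completion
      restartPolicy: Never     # Jobs require Never or OnFailure
  backoffLimit: 4              # retry up to 4 times before marking the Job failed
```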
The controller manager is responsible for noticing and responding when nodes go down, whereas the scheduler watches for newly created Pods that have no Node assigned. For every Pod the scheduler discovers, it becomes responsible for finding the best Node for that Pod to run on.
So the Kubernetes scheduler is a separate process from the controller manager. For more information, refer to this doc written by Jorge Acetozi.