Every resource type in Kubernetes is managed by a "controller": an application that runs a watch loop, reacting to changes in the resources it watches. The kube-controller-manager runs and manages all the built-in controllers, but additional resource types (e.g., CRDs) require user-deployed controllers.
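As an illustrative sketch (the Widget kind and example.com group here are made up), a CRD registers a new resource type, but instances of it do nothing until someone deploys a controller that watches them:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # hypothetical group/kind, for illustration only
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

Once this is applied, the API server will happily store Widget objects; without a controller running a watch loop against them, though, they're just inert records.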
Ingress resources are an interesting example, as they're a (the only?) built-in resource type without a built-in controller. Instead, Kubernetes distributions often package an ingress controller for their cloud platform (e.g., gce-ingress). By themselves ingress resources are meaningless; it's only that separate ingress controller that makes them real within a cluster.
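For concreteness, here's a minimal sketch of an ingress resource, routing the foo.bar.com hostname from the example below to a foo-bar service (the networking.k8s.io/v1 API and port 80 are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar.com          # route requests for this hostname...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-bar      # ...to this service (port assumed)
            port:
              number: 80
```

Apply this to a cluster with no ingress controller and nothing happens; no process is watching it, so no traffic is routed.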
An ingress controller monitors ingress resources for changes (including creations and deletions) and updates its internal routing rules accordingly.
The controller itself is exposed to the outside world, with any relevant hostnames pointing to it. There are only two methods of exposing an application outside a Kubernetes cluster, NodePort or LoadBalancer services; most production deployments will use a cloud-provider-configured load balancer.
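A sketch of such an exposing service, assuming the controller's pods carry an app: nginx-ingress label (selectors and ports vary between controller distributions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress   # assumed label on the controller's pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

On a cloud platform, the LoadBalancer type prompts the provider's integration to provision an external load balancer and record its address in the service's status.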
For example, requests for foo.bar.com will resolve to the IP of the cloud load balancer configured by a LoadBalancer-type service, which directs them to the ingress controller; the controller in turn reads each request's hostname and routes it to the appropriate application service.
client
└── foo.bar.com ── foo.bar.com. 60 IN CNAME abcd0123.loadbalancer.google.com
└── svc/nginx-ingress (LoadBalancer) ── deployment/nginx-ingress ── ingress/foo-bar
└── svc/foo-bar (ClusterIP) ── deployment/foo-bar
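The last hop in that chain, svc/foo-bar, is an ordinary ClusterIP service; it needs no external exposure of its own, since only the ingress controller talks to it. A sketch, assuming the foo-bar deployment labels its pods app: foo-bar and listens on port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: foo-bar
spec:
  type: ClusterIP
  selector:
    app: foo-bar       # assumed pod label on deployment/foo-bar
  ports:
  - port: 80           # port the ingress backend points at
    targetPort: 8080   # assumed container port
```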
Generally ingress controller manifests come packaged with a LoadBalancer-type service to expose them, but in some cases it may be necessary to expose the controller manually. Either way, an ingress controller cannot route outside traffic without a public IP.
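Where no load balancer integration is available, a NodePort service is the usual manual fallback; a sketch (the nodePort values are assumptions and must fall within the cluster's configured NodePort range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress   # assumed controller pod label, as above
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080      # assumed; default range is 30000-32767
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443      # assumed
```

The controller then answers on every node's IP at those ports, and pointing a public IP or external load balancer at the nodes is left to you.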