istio: Wrong route configuration when multi-level wildcards are used

Is this the right place to submit this?

  • This is not a security vulnerability or a crashing bug
  • This is not a question about how to use Istio

Bug Description

Hello everyone, I encountered an issue while tinkering with ServiceEntry (SE) and VirtualService (VS) resources that use wildcards. I noticed that under certain configurations I can end up with an incorrect route being generated.

Here are some steps to reproduce (archive with the CRs: resources.tar.gz):

  • Install Istio (I used 1.17.2)
  • Deploy Sleep with sidecar in istio-system
kubectl label ns istio-system istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/sleep/sleep.yaml -n istio-system
  • Deploy one SE with host “*.hello.test.abc.example.com” and one with host “*.abc.example.com”. You can use the resources inside my archive and deploy them with kubectl apply -f serviceentries/ -n istio-system
  • Deploy two VS, one for each host. Each VS needs to define a route to its respective wildcard host. You can deploy my two VS using kubectl apply -f virtualservices/ -n istio-system
  • Check the configdump and you will notice that the *.abc.example.com virtual host ended up with a route to *.hello.test.abc.example.com instead of the route to *.abc.example.com.
{
     "version_info": "2023-06-12T12:47:43Z/11",
     "route_config": {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "443",
      "virtual_hosts": [
       {
        "name": "*.abc.example.com:443",
        "domains": [
         "*.abc.example.com"
        ],
        "routes": [
         {
          "match": {
           "prefix": "/",
           "case_sensitive": true
          },
          "route": {
           "cluster": "outbound|443||*.hello.test.abc.example.com",
           "timeout": "15s",
           "retry_policy": {
            "retry_on": "connect-failure,refused-stream,reset,unavailable,cancelled,resource-exhausted",
            "num_retries": 3,
            "per_try_timeout": "15s",
            "retry_host_predicate": [
             {
              "name": "envoy.retry_host_predicates.previous_hosts",
              "typed_config": {
               "@type": "type.googleapis.com/envoy.extensions.retry.host.previous_hosts.v3.PreviousHostsPredicate"
              }
             }
            ],
            "host_selection_retry_max_attempts": "5"
           },
           "max_grpc_timeout": "15s"
          },
          "metadata": {
           "filter_metadata": {
            "istio": {
             "config": "/apis/networking.istio.io/v1alpha3/namespaces/istio-system/virtual-service/vs1"
            }
           }
          },
          "decorator": {
           "operation": "*.hello.test.abc.example.com:443/*"
          },
          "name": ".https-port"
         }
        ],
        "include_request_attempt_count": true
       },

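For reference, the resources in the archive look roughly like the following (a minimal sketch based on the hosts in the steps above; the resource names, port protocol, and VS route details are assumptions, except for vs1 and the https-port name, which appear in the dump). The second SE/VS pair is identical with *.hello.test.abc.example.com substituted for the host:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-abc                  # name is an assumption
spec:
  hosts:
  - "*.abc.example.com"
  ports:
  - number: 443
    name: https-port            # matches the route name ".https-port" in the dump
    protocol: HTTP              # protocol is an assumption
  resolution: NONE
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs1                     # per the filter_metadata config path in the dump
spec:
  hosts:
  - "*.abc.example.com"
  http:
  - route:
    - destination:
        host: "*.abc.example.com"
```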
I suspect that this issue originates from the fact that the SE wildcard *.abc.example.com also matches the wildcard hosts in the VSs. Is this the intended behavior? Can anything be done to support this multi-level wildcard scenario?
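If that suspicion is right, the overlap can be illustrated with a simplified suffix-based wildcard check (a sketch of the idea, not the actual Istio code; `matches` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// matches mimics, in simplified form, wildcard host matching:
// "*.suffix" matches any host name ending in ".suffix" -- including
// another, more specific wildcard such as "*.hello.test.suffix".
func matches(wildcard, host string) bool {
	if strings.HasPrefix(wildcard, "*") {
		suffix := wildcard[1:] // e.g. ".abc.example.com"
		return strings.HasSuffix(strings.TrimPrefix(host, "*"), suffix)
	}
	return wildcard == host
}

func main() {
	// The broad SE wildcard matches the narrower VS wildcard, which
	// would explain the wrong route being attached to its virtual host.
	fmt.Println(matches("*.abc.example.com", "*.hello.test.abc.example.com")) // true
	// The reverse is not true.
	fmt.Println(matches("*.hello.test.abc.example.com", "*.abc.example.com")) // false
}
```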

Version

client version: 1.17.2
control plane version: 1.17.2
data plane version: 1.17.2 (2 proxies)

Additional Information

bug-report.tar.gz

Affected product area

  • Networking

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 22 (20 by maintainers)

Most upvoted comments

FYI I believe I have a fix for this; testing now