Our generated libraries should exist in a passing state by default. Furthermore, since no human intervention occurs on them, they are unlikely to break.
As a result, running unit tests on them seems to add little value (but does significantly increase test execution time).
We should remove the unit, system, and sample tests from generated libraries. This should also align us better with the practices of other language teams.
System tests (for generated packages)
These tests just create the client and close it. They are not testing any behavior.
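A minimal sketch of the pattern these system tests follow (the class name here is a hypothetical stand-in, not the actual generated client): the test constructs a client, closes it, and asserts nothing about service behavior.

```python
# Hypothetical sketch of a generated "system test". Illustration only:
# the real tests instantiate an actual generated client class.
class FakeClient:
    """Stand-in for a generated client class."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def test_create_and_close_client():
    # The entire test: create the client, close it. No RPC behavior
    # is ever exercised.
    client = FakeClient()
    client.close()
    assert client.closed
```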
I don't believe these add any useful coverage to justify the continued maintenance burden.
Sample tests (for generated packages)
These tests appear to call service RPCs with an empty request, and fail if the request fails for any reason. In exercising these tests I have seen failures like the following:
- google-cloud-apigeeregistry fails with a random DEADLINE_EXCEEDED.
- google-area120-tables returns UNIMPLEMENTED: Received HTTP status code 404. The product appears to be down (perhaps intentionally turned down?) and its service is unreachable.
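A hedged sketch of what these sample tests amount to (all names here are hypothetical stand-ins, not the generated code): the sample calls an RPC with an empty request, so any transport- or service-level error, like the ones above, surfaces as a test failure.

```python
# Hypothetical sketch of a generated sample test. Illustration only:
# real samples call a generated client's RPC methods.
class ServiceError(Exception):
    """Stand-in for an RPC error such as DEADLINE_EXCEEDED or UNIMPLEMENTED."""


class FakeServiceClient:
    """Stand-in for a generated service client."""

    def __init__(self, healthy=True):
        self.healthy = healthy

    def list_resources(self, request):
        # Any backend problem (timeout, turned-down service, ...) raises,
        # which the sample test treats as a failure.
        if not self.healthy:
            raise ServiceError("DEADLINE_EXCEEDED")
        return []


def sample_list_resources(client):
    # The sample simply invokes the RPC with an empty request.
    return client.list_resources(request={})
```

Because the assertion is effectively "the service answered", these tests measure service uptime rather than library correctness.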
These are essentially service smoke tests. Again, they don't seem to be testing any behavior. Services should monitor their own uptime.
I don't think these add enough meaningful coverage to justify the maintenance burden.