Is this all an official standard, or just a convention that most people adhere to?
This is in the specification for GFF3:
Columns 4 & 5: "start" and "end"
The start and end coordinates of the feature are given in positive 1-based integer coordinates, relative to the landmark given in column one. Start is always less than or equal to end. For features that cross the origin of a circular feature (e.g. most bacterial genomes, plasmids, and some viral genomes), the requirement for start to be less than or equal to end is satisfied by making end = the position of the end + the length of the landmark feature.
For zero-length features, such as insertion sites, start equals end and the implied site is to the right of the indicated base in the direction of the landmark.
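To make the origin-crossing rule concrete, here is a small sketch of how the quoted arithmetic works out. The function names and the 4000 bp plasmid are illustrative, not from any spec:

```python
# Illustrative helpers for the GFF3 rules quoted above.
# GFF3 coordinates are 1-based and inclusive on both ends.

def gff3_span_length(start: int, end: int) -> int:
    """Length of a GFF3 feature (1-based, inclusive)."""
    return end - start + 1

def wrap_end_for_origin_crossing(raw_end: int, landmark_length: int) -> int:
    """For a feature crossing the origin of a circular landmark, the spec
    keeps start <= end by setting end = raw end + landmark length."""
    return raw_end + landmark_length

# A feature on a 4000 bp plasmid running from base 3950 through base 50
# crosses the origin, so its recorded end becomes 50 + 4000 = 4050.
assert wrap_end_for_origin_crossing(50, 4000) == 4050
# Its length: 51 bases (3950..4000) plus 50 bases (1..50) = 101.
assert gff3_span_length(3950, 4050) == 101
```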
As for BED, 0-based indexing is spelled out in the UCSC documentation:
chromStart - The starting position of the feature in the chromosome or scaffold. The first base in a chromosome is numbered 0.
chromEnd - The ending position of the feature in the chromosome or scaffold. The chromEnd base is not included in the display of the feature. For example, the first 100 bases of a chromosome are defined as chromStart=0, chromEnd=100, and span the bases numbered 0-99.
If you submit data to the browser in position format (chr#:##-##), the browser assumes this information is 1-based. If you submit data in any other format (BED (chr# ## ##) or otherwise), the browser will assume it is 0-based. You can see this both in our liftOver utility and in our search bar, by entering the same numbers in position or BED format and observing the results. Similarly, any data returned by the browser in position format is 1-based, while data returned in BED, wiggle, etc is 0-based.
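Putting the two quoted conventions side by side: GFF3 is 1-based and inclusive, BED is 0-based and half-open, so a converter only needs to shift the start coordinate. A minimal sketch (the function name is mine, not from either spec):

```python
# GFF3 interval: 1-based, inclusive  [start, end]
# BED interval:  0-based, half-open  [chromStart, chromEnd)

def gff3_to_bed_interval(start: int, end: int) -> tuple[int, int]:
    """Convert a 1-based inclusive interval to a 0-based half-open one."""
    return start - 1, end

# The first 100 bases of a chromosome: GFF3 says 1..100, BED says 0..100.
assert gff3_to_bed_interval(1, 100) == (0, 100)

# The span length comes out the same either way:
chrom_start, chrom_end = gff3_to_bed_interval(1, 100)
assert chrom_end - chrom_start == 100
```

A handy consequence of the half-open convention is that BED lengths are simply `chromEnd - chromStart`, with no `+ 1` to forget.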
These are the specifications we follow for our GFF3-to-BED and other conversion utility scripts. But convention is whatever people actually use, and labs are known to do their own thing (as we found out with GFF3, as it happens, when it broke one of our analysis pipelines). You can write tools that rely on conventions, but nothing beats healthy skepticism about how standards are interpreted, and judicious use of debugging tools when the data has a "smell".