We wanted to check that the text could be encoded as JSON, because
conflict resolutions are passed back and forth in that format, so the
file itself must be UTF-8. However, all strings returned by Rugged come
back without an encoding, so Ruby treats them as ASCII-8BIT.
We force the encoding to UTF-8 and reject the file if the result is
invalid. This still leaves the problem of a file that 'looks like' UTF-8
(contains only valid UTF-8 byte sequences) but was written in a
different encoding. However:
1. If the conflicts contain the problem bytes, the user will see that
the file isn't displayed correctly.
2. If the problem bytes are outside the conflict area, then we will
   write back the same bytes when we resolve the conflicts, even though
   we thought the encoding was UTF-8.
Such files can't be resolved in the UI because, if they aren't in a
UTF-8-compatible encoding, they can't be rendered as JSON. Even if they
could be, we would be implicitly changing the file's encoding anyway,
which seems like a bad idea.
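
A minimal sketch of the check described above, assuming the raw bytes
come from a Rugged blob; the class and error names here are
illustrative, not the actual implementation:

```ruby
require 'json'

# Illustrative sketch only: class and error names are hypothetical.
class ConflictFile
  class UnsupportedEncoding < StandardError; end

  def initialize(blob_data)
    # Rugged returns blob data without an encoding, i.e. ASCII-8BIT.
    @blob_data = blob_data
  end

  # Reinterpret the raw bytes as UTF-8 and reject the file if the
  # result is invalid. This cannot catch bytes that merely look like
  # valid UTF-8 but were written in a different encoding.
  def content
    text = @blob_data.dup.force_encoding(Encoding::UTF_8)
    raise UnsupportedEncoding unless text.valid_encoding?

    text
  end

  # Conflict resolutions are passed back and forth as JSON, which is
  # why the content must be valid UTF-8 in the first place.
  def as_json_payload
    { content: content }.to_json
  end
end
```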
- Add match line header to expected result for `File#sections`.
- Lowercase CSS colours.
- Remove unused `diff_refs` keyword argument.
- Rename `parent` -> `parent_file`, to be more explicit.
- Skip an iteration when highlighting.