
[CS2] Comments (#4572)

* Make `addLocationDataFn` more DRY

* Style fixes

* Provide access to full parser inside our custom function running in parser.js; rename the function to lay the groundwork for adding data aside from location data

* Fix style.

* Fix style.

* Label test comments

* Update grammar to remove comment tokens; update DSL to call new helper function that preserves comments through parsing

* New implementation of compiling block comments: the lexer pulls them out of the token stream, attaching them as a property to a token; the rewriter moves the attachment around so it lives on a token that is destined to make it through to compilation (and in a good placement); and the nodes render the block comment (see the comment-pipeline sketch after this list). All tests but one pass (commented out).

* If a comment follows a class declaration, move the comment inside the class body

* Style

* Improve indentation of multiline comments

* Fix indentation for block comments, at least in the cases covered by the one failing test

* Don’t reverse the order of unshifted comments

* Simplify rewriter’s handling of comments, generalizing the special case

* Expand the list of tokens we need to avoid for passing comments through the parser; get some literal tokens to have nodes created for them so that the comments pass through

* Improve comments; fix multiline flag

* Prepare HereComments for processing line comments

* Line comments, first draft: the tests pass, but the line comments aren’t indented and sometimes trail previous lines when they shouldn’t; updated compiler output in following commit

* Updated compiler, now with line comments

* `process` doesn’t exist in the browser, so we should check for its existence first

* Update parser output

* Test that proves #4290 is fixed

* Indent line comments, first pass

* Compiled output with indented line comments

* Comments that start a new line shouldn’t trail; don’t skip comments attached to generated tokens; stop looking for indentation once we hit a newline

* Revised output

* Cleanup

* Split “multiline” line comment tokens, shifting them forward or back as appropriate

* Fix comments in module specifiers

* Abstract attaching comments to a node

* Line comments in interpolated strings

* Line comments can’t be multiline anymore

* Improve handling of blank lines and indentation of following comments that start a new line (i.e. don’t trail)

* Make comments compilation more object-oriented

* Remove lots of dead code that we don’t need anymore because a comment is never a node, only a fragment

* Improve eqJS helper

* Fix #4290 definitively, with improved output for arrays with interspersed block comments (see the array sketch after this list)

* Add support for line comments output interspersed within arrays

* Fix mistake, don’t lose the variable we’re working on

* Remove redundant replacements

* Check for indentation only from the start of the string

* Indentations in generated JS are always multiples of two spaces (never tabs) so just look for 2+ spaces

* Update package versions; run Babel twice, once for each preset, temporarily until a Babili bug is fixed that prevents it from running with the env preset

* Don’t rely on `fragment.type`, which can break when the compiler is minified

* Updated generated docs and browser compiler

* Output block comments after function arguments

* Comments appear above scope `var` declarations; better tracking of generated `JS` tokens created only to shepherd comments through to the output

* Create new FuncGlyph node, to hold comments we want to output near the function parameters

* Block comments between `)` and `->`/`=>` get output between `)` and `{` (see the Flow sketch after this list).

* Fix indentation of comments that are the first line inside a bare mode block

* Updated output

* Full Flow example

* Updated browser compiler

* Abstract and organize comment fragment generation code; store more properties on the comment fragment objects; make `throw` behave like `return`

* Abstract token insertion code

* Add missing locationData to STRING_START token, giving it the locationData of the overall StringWithInterpolations token so that comments attached to STRING_START end up on the StringWithInterpolations node

* Allow `SUPER` tokens to carry comments

* Rescue comments from `Existence` nodes and `If` nodes’ conditions

* Rescue comments after `\` line continuation tokens

* Updated compiled output

* Updated browser compiler

* Output block comments in the same `compileFragments` method as line comments, except for inline block comments

* Comments before splice

* Updated browser compiler

* Track compiledComments as a property of Base, to ensure that it’s not a global variable

* Docs: split up the Usage section

* Docs for type annotations via Flow; updated docs output

* Update regular comments documentation

* Updated browser compiler

* Comments before soak

* Comments before static methods, and probably before `@variable =` (this) assignments generally

* Comments before ‘if exists?’, refactor comment before ‘if this.var’ to be more precise, improve helper methods

* Comments before a method that contains ‘super()’ should output above the method property, not above the ‘super.method()’ call

* Fix missing comments before `if not` (i.e. before a UNARY token)

* Fix comments before ‘for’; add a test for a comment before an `if` assignment (fixed in an earlier commit)

* Comments within heregexes

* Updated browser compiler

* Update description to reflect what’s now happening in compileCommentFragments

* Preserve blank lines between line comments; output “whitespace-only” line comments as blank lines, rather than `//` followed by whitespace (see the blank-line sketch after this list)

* Better future-proof comments tests

* Comments before object destructuring; abstract method for setting comments aside before compilation

* Handle more cases of comments before or after `for` loop declaration lines

* Fix indentation of comments preceding `for` loops

* Fix comment before splat function parameter

* Catch another RegexWithInterpolations comment edge case

* Updated browser compiler

* Change heregex example to one that’s more readable; update output

* Remove a few last references to the defunct HERECOMMENT token

* Abstract location hash creation into a function

* Improved clarity per code review notes

* Updated browser compiler
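
To illustrate the comment pipeline described above (the lexer detaches the comment, the rewriter re-attaches it to a token that survives into compilation, and the nodes render it), here is a minimal sketch; the snippet is illustrative, and the exact placement and indentation in the compiled JavaScript may differ:

```coffee
# A line comment: previously discarded entirely, now expected to be
# carried through and emitted as `//` in the generated JavaScript.

###
A block comment: expected to come through as a `/* ... */` block
next to the code it annotates.
###
square = (x) -> x * x
```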
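
The #4290 fix concerns comments interspersed inside array literals. A minimal sketch of the kind of input that should now keep its comments between the corresponding elements of the generated array (placement in the output is approximate):

```coffee
arr = [
  1
  # a line comment between elements
  2
  ### a block comment between elements ###
  3
]
```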
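
The `)`-to-`{` placement of block comments is what makes Flow-style type annotations practical, because Flow expects `/*: Type */` right after a parameter or parameter list. The following is a hedged sketch of such input (the type and names are illustrative, not taken from the docs example); the parameter annotation should land as `/*: Person */` and the return annotation between `)` and `{` in the compiled function:

```coffee
# @flow

###::
type Person = {
  name: string,
};
###

greet = (person ###: Person ###) ###: string ### ->
  "Hello, #{person.name}!"
```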
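
A small sketch of the blank-line behavior noted above: consecutive line comments separated by a blank line should keep that blank line in the output, and a `#` with nothing after it should come out as a plain blank line rather than `//` plus trailing whitespace (again, output formatting is approximate):

```coffee
# first comment

# second comment, still separated from the first by a blank line
#
value = 42
```
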
Geoffrey Booth 2017-08-02 19:34:34 -07:00 committed by GitHub
parent 6c9cf37811
commit 6d21dc5495
46 changed files with 10394 additions and 1437 deletions


@@ -1,5 +1,9 @@
// Generated by CoffeeScript 2.0.0-beta3
(function() {
// CoffeeScript can be used both on the server, as a command-line compiler based
// on Node.js/V8, or to run CoffeeScript directly in the browser. This module
// contains the main entry functions for tokenizing, parsing, and compiling
// source CoffeeScript into JavaScript.
var Lexer, SourceMap, base64encode, checkShebangLine, compile, formatSourcePosition, getSourceMap, helpers, lexer, packageJson, parser, sourceMaps, sources, withPrettyErrors;
({Lexer} = require('./lexer'));
@@ -10,19 +14,28 @@
SourceMap = require('./sourcemap');
// Require `package.json`, which is two levels above this file, as this file is
// evaluated from `lib/coffeescript`.
packageJson = require('../../package.json');
// The current CoffeeScript version number.
exports.VERSION = packageJson.version;
exports.FILE_EXTENSIONS = ['.coffee', '.litcoffee', '.coffee.md'];
// Expose helpers for testing.
exports.helpers = helpers;
// Function that allows for btoa in both nodejs and the browser.
base64encode = function(src) {
switch (false) {
case typeof Buffer !== 'function':
return Buffer.from(src).toString('base64');
case typeof btoa !== 'function':
// The contents of a `<script>` block are encoded via UTF-16, so if any extended
// characters are used in the block, btoa will fail as it maxes out at UTF-8.
// See https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64/Base64_encoding_and_decoding#The_Unicode_Problem
// for the gory details, and for the solution implemented here.
return btoa(encodeURIComponent(src).replace(/%([0-9A-F]{2})/g, function(match, p1) {
return String.fromCharCode('0x' + p1);
}));
@@ -31,6 +44,8 @@
}
};
// Function wrapper to add source file information to SyntaxErrors thrown by the
// lexer/parser/compiler.
withPrettyErrors = function(fn) {
return function(code, options = {}) {
var err;
@@ -38,7 +53,7 @@
return fn.call(this, code, options);
} catch (error) {
err = error;
if (typeof code !== 'string') {
if (typeof code !== 'string') { // Support `CoffeeScript.nodes(tokens)`.
throw err;
}
throw helpers.updateSyntaxError(err, code, options.filename);
@@ -46,14 +61,35 @@
};
};
// For each compiled file, save its source in memory in case we need to
// recompile it later. We might need to recompile if the first compilation
// didn’t create a source map (faster) but something went wrong and we need
// a stack trace. Assuming that most of the time, code isn’t throwing
// exceptions, it’s probably more efficient to compile twice only when we
// need a stack trace, rather than always generating a source map even when
// it’s not likely to be used. Save in form of `filename`: `(source)`
sources = {};
// Also save source maps if generated, in form of `filename`: `(source map)`.
sourceMaps = {};
// Compile CoffeeScript code to JavaScript, using the Coffee/Jison compiler.
// If `options.sourceMap` is specified, then `options.filename` must also be
// specified. All options that can be passed to `SourceMap#generate` may also
// be passed here.
// This returns a javascript string, unless `options.sourceMap` is passed,
// in which case this returns a `{js, v3SourceMap, sourceMap}`
// object, where sourceMap is a sourcemap.coffee#SourceMap object, handy for
// doing programmatic lookups.
exports.compile = compile = withPrettyErrors(function(code, options) {
var currentColumn, currentLine, encoded, extend, filename, fragment, fragments, generateSourceMap, header, i, j, js, len, len1, map, merge, newLines, ref, ref1, sourceMapDataURI, sourceURL, token, tokens, v3SourceMap;
({merge, extend} = helpers);
options = extend({}, options);
// Always generate a source map if no filename is passed in, since without
// a filename we have no way to retrieve this source later in the event that
// we need to recompile it to get a source map for `prepareStackTrace`.
generateSourceMap = options.sourceMap || options.inlineMap || (options.filename == null);
filename = options.filename || '<anonymous>';
checkShebangLine(filename, code);
@@ -62,6 +98,8 @@
map = new SourceMap;
}
tokens = lexer.tokenize(code, options);
// Pass a list of referenced variables, so that generated variables won’t get
// the same name.
options.referencedVars = (function() {
var i, len, results;
results = [];
@@ -73,6 +111,7 @@
}
return results;
})();
// Check for import or export; if found, force bare mode.
if (!((options.bare != null) && options.bare === true)) {
for (i = 0, len = tokens.length; i < len; i++) {
token = tokens[i];
@@ -94,7 +133,9 @@
js = "";
for (j = 0, len1 = fragments.length; j < len1; j++) {
fragment = fragments[j];
// Update the sourcemap with data from each fragment.
if (generateSourceMap) {
// Do not include empty, whitespace, or semicolon-only fragments.
if (fragment.locationData && !/^[;\s]*$/.test(fragment.code)) {
map.add([fragment.locationData.first_line, fragment.locationData.first_column], [currentLine, currentColumn], {
noReplace: true
@@ -108,6 +149,7 @@
currentColumn += fragment.code.length;
}
}
// Copy the code from each fragment into the final JavaScript.
js += fragment.code;
}
if (options.header) {
@@ -135,10 +177,14 @@
}
});
// Tokenize a string of CoffeeScript code, and return the array of tokens.
exports.tokens = withPrettyErrors(function(code, options) {
return lexer.tokenize(code, options);
});
// Parse a string of CoffeeScript code or an array of lexed tokens, and
// return the AST. You can then compile it by calling `.compile()` on the root,
// or traverse it by using `.traverseChildren()` with a callback.
exports.nodes = withPrettyErrors(function(source, options) {
if (typeof source === 'string') {
return parser.parse(lexer.tokenize(source, options));
@@ -147,12 +193,21 @@
}
});
// This file used to export these methods; leave stubs that throw warnings
// instead. These methods have been moved into `index.coffee` to provide
// separate entrypoints for Node and non-Node environments, so that static
// analysis tools don’t choke on Node packages when compiling for a non-Node
// environment.
exports.run = exports.eval = exports.register = function() {
throw new Error('require index.coffee, not this file');
};
// Instantiate a Lexer for our use here.
lexer = new Lexer;
// The real Lexer produces a generic stream of tokens. This object provides a
// thin wrapper around it, compatible with the Jison API. We can then pass it
// directly as a "Jison lexer".
parser.lexer = {
lex: function() {
var tag, token;
@@ -171,14 +226,19 @@
return this.pos = 0;
},
upcomingInput: function() {
return "";
return '';
}
};
// Make all the AST nodes visible to the parser.
parser.yy = require('./nodes');
// Override Jison's default error handling function.
parser.yy.parseError = function(message, {token}) {
var errorLoc, errorTag, errorText, errorToken, tokens;
// Disregard Jison's message, it contains redundant line number information.
// Disregard the token, we take its value directly from the lexer in case
// the error is caused by a generated token which might refer to its origin.
({errorToken, tokens} = parser);
[errorTag, errorText, errorLoc] = errorToken;
errorText = (function() {
@@ -193,9 +253,15 @@
return helpers.nameWhitespaceCharacter(errorText);
}
})();
// The second argument has a `loc` property, which should have the location
// data for this token. Unfortunately, Jison seems to send an outdated `loc`
// (from the previous token), so we take the location information directly
// from the lexer.
return helpers.throwSyntaxError(`unexpected ${errorText}`, errorLoc);
};
// Based on http://v8.googlecode.com/svn/branches/bleeding_edge/src/messages.js
// Modified to handle sourceMap
formatSourcePosition = function(frame, getSourceMapping) {
var as, column, fileLocation, filename, functionName, isConstructor, isMethodCall, line, methodName, source, tp, typeName;
filename = void 0;
@@ -214,6 +280,7 @@
filename || (filename = "<anonymous>");
line = frame.getLineNumber();
column = frame.getColumnNumber();
// Check for a sourceMap position
source = getSourceMapping(filename, line, column);
fileLocation = source ? `${filename}:${source[0]}:${source[1]}` : `${filename}:${line}:${column}`;
}
@@ -248,6 +315,9 @@
var answer;
if (sourceMaps[filename] != null) {
return sourceMaps[filename];
// CoffeeScript compiled in a browser may get compiled with `options.filename`
// of `<anonymous>`, but the browser may request the stack trace with the
// filename of the script file.
} else if (sourceMaps['<anonymous>'] != null) {
return sourceMaps['<anonymous>'];
} else if (sources[filename] != null) {
@@ -262,6 +332,10 @@
}
};
// Based on [michaelficarra/CoffeeScriptRedux](http://goo.gl/ZTx1p)
// NodeJS / V8 have no support for transforming positions in stack traces using
// sourceMap, so we must monkey-patch Error to display CoffeeScript source
// positions.
Error.prepareStackTrace = function(err, stack) {
var frame, frames, getSourceMapping;
getSourceMapping = function(filename, line, column) {