mirror of https://github.com/ruby/ruby.git synced 2022-11-09 12:17:21 -05:00

mjit.c: merge MJIT infrastructure

which allows JIT-compiling Ruby methods by generating C code and
invoking a C compiler.  See the first comment of mjit.c for an overview
of what this file does.

mjit.c was authored by Vladimir Makarov <vmakarov@redhat.com>.
After he invented this great method-JIT infrastructure for MRI as MJIT,
Lars Kanis <lars@greiz-reinsdorf.de> sent a patch to support MinGW
in MJIT. In addition to merging it, I ported the pthread usage to Windows
native threads, so this MJIT infrastructure can now be compiled with
Visual Studio.

This commit simplifies mjit.c to reduce the amount of code in the initial
merge. For example, it does not provide support for multiple JIT threads.
We can resurrect that later if we really want it, but I wanted to minimize
the diff to make this patch easier to review.

The `/tmp/_mjitXXX` file is renamed to `/tmp/_ruby_mjitXXX` because non-Ruby
developers may not recognize the name "mjit", and the file name should make
clear that it comes from Ruby and not from some harmful program.  TODO: it
may be better to store this in a temporary directory which Ruby is already
using via Tempfile, if that is not bad for performance.

mjit.h: new file. It has the `mjit_exec` interface, similar to `vm_exec`,
which triggers MJIT. Compared to the original MJIT, this drops the
interface for AOT.

Makefile.in: define macros that tell MJIT the path of the MJIT header.
We can probably refactor this to reduce the number of macros (TODO).
win32/Makefile.sub: ditto.

common.mk: compile mjit.o and mjit_compile.o. Unlike the original MJIT, this
commit separates the MJIT infrastructure and the JIT compiler code into
independent object files. As the initial patch is NOT going to ship an
ultra-fast JIT compiler, the compiler is likely to be replaced later, e.g.
by the original MJIT's compiler or some future JIT implementations which
are not public now.

inits.c: define the MJIT module. This is added because `MJIT.enabled?` is
necessary for testing.
test/lib/zombie_hunter.rb: skip if `MJIT.enabled?`. Obviously this
wouldn't work with the current code when JIT is enabled.
test/ruby/test_io.rb: skip this too. It would make no sense with MJIT.

ruby.c: define MJIT CLI options. As a major difference from the original
MJIT, "-j:l"/"--jit:llvm" are renamed to "--jit-cc" because I want to support
not only gcc/clang but also cl.exe (Visual Studio) in the future, although
it accepts only "--jit-cc=gcc" and "--jit-cc=clang" for now. Only the long
"--jit" options are allowed, since some Ruby committers preferred that at
the Ruby developers meeting in January, and some options are renamed.
This file also triggers initialization of the MJIT thread and variables.
eval.c: finalize the MJIT worker thread and variables.
test/ruby/test_rubyoptions.rb: fix the number of CLI options for --jit.

thread_pthread.c: changes for the pthread abstraction in MJIT. Add an rb_
prefix to functions which are used by other files.
thread_win32.c: ditto, for Windows.  This pthread porting is one of the
major pieces of work done in YARV-MJIT, my fork of MJIT, in Feature 14235.
thread.c: follow the rb_ prefix changes.

vm.c: trigger an MJIT call on VM invocation. Also trigger `mjit_mark` to avoid
SEGVs caused by a race between the JIT and GC of an ISeq. This improvement
was provided by wanabe <s.wanabe@gmail.com>.
In the JIT compiler I created and am going to add in my next commit, I found
that having `mjit_exec` after `vm_loop_start:` is harmful because the
JIT-ed function doesn't proceed to other ISeqs on RESTORE_REGS of the leave
insn. Executing a non-FINISH frame is unexpected for my JIT compiler, and
`exception_handler` triggers execution of such ISeqs. So `mjit_exec`
here should be executed only when it comes directly from a `vm_exec` call.
The `RubyVM::MJIT` module and its `.enabled?` method are added so that we can
skip some tests which don't expect JIT threads or compiler file descriptors.

vm_insnhelper.h: trigger MJIT on method calls during VM execution.

vm_core.h: add fields required by mjit.c. `bp` must be `cfp[6]` because
rb_control_frame_struct is likely to be cast to another struct; the
last position is the safest place to add a new field.
vm_insnhelper.c: save the initial value of cfp->ep as cfp->bp. This is an
optimization done in both MJIT and YARV-MJIT, so the change is included
in this commit. Calculating bp from ep is slightly heavy work, so bp
acts as a cache for it.

iseq.c: notify MJIT of ISeq GC. We need to know which iseq in the MJIT queue
has been GCed to avoid SEGVs.  TODO: unload GCed units in some safe way.

gc.c: add hooks so that MJIT can wait for GC, and vice versa. Simultaneous
JIT and GC execution may cause a SEGV, so we should synchronize them.

cont.c: save continuation information in the MJIT worker. As MJIT shouldn't
unload JIT-ed code that is in use, MJIT wants to know the full list of
execution contexts saved for continuations so it can detect ISeqs in use.

mjit_compile.c: add an empty JIT compiler so that you can reuse this commit
to build your own JIT compiler. This commit tries to compile ISeqs, but
all of them are treated as not supported, so you can't use the JIT
compiler yet even though the --jit option is now added.

Patch author: Vladimir Makarov <vmakarov@redhat.com>.

Contributors:
Takashi Kokubun <takashikkbn@gmail.com>.
wanabe <s.wanabe@gmail.com>.
Lars Kanis <lars@greiz-reinsdorf.de>.

Part of Features 12589 and 14235.

git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@62189 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
This commit is contained in:
k0kubun 2018-02-04 06:58:09 +00:00
parent b2de4e0bee
commit fd44a5777f
20 changed files with 1713 additions and 134 deletions

Makefile.in

@ -69,7 +69,7 @@ debugflags = @debugflags@
warnflags = @warnflags@ @strict_warnflags@
cppflags = @cppflags@
XCFLAGS = @XCFLAGS@
CPPFLAGS = @CPPFLAGS@ $(INCFLAGS)
CPPFLAGS = @CPPFLAGS@ $(INCFLAGS) -DMJIT_HEADER_BUILD_DIR=\""$(EXTOUT)/include/$(arch)"\" -DLIBRUBYARG_SHARED=\""$(LIBRUBYARG_SHARED)"\" -DLIBRUBY_LIBDIR=\""$(prefix)/lib"\" -DMJIT_HEADER_INSTALL_DIR=\""$(prefix)/include/$(RUBY_BASE_NAME)-$(ruby_version)/$(arch)"\"
LDFLAGS = @STATIC@ $(CFLAGS) @LDFLAGS@
EXTLDFLAGS = @EXTLDFLAGS@
XLDFLAGS = @XLDFLAGS@ $(EXTLDFLAGS)

common.mk

@ -96,6 +96,8 @@ COMMONOBJS = array.$(OBJEXT) \
load.$(OBJEXT) \
marshal.$(OBJEXT) \
math.$(OBJEXT) \
mjit.$(OBJEXT) \
mjit_compile.$(OBJEXT) \
node.$(OBJEXT) \
numeric.$(OBJEXT) \
object.$(OBJEXT) \
@ -1522,6 +1524,7 @@ cont.$(OBJEXT): {$(VPATH)}internal.h
cont.$(OBJEXT): {$(VPATH)}io.h
cont.$(OBJEXT): {$(VPATH)}method.h
cont.$(OBJEXT): {$(VPATH)}missing.h
cont.$(OBJEXT): {$(VPATH)}mjit.h
cont.$(OBJEXT): {$(VPATH)}node.h
cont.$(OBJEXT): {$(VPATH)}onigmo.h
cont.$(OBJEXT): {$(VPATH)}oniguruma.h
@ -1773,6 +1776,7 @@ eval.$(OBJEXT): {$(VPATH)}io.h
eval.$(OBJEXT): {$(VPATH)}iseq.h
eval.$(OBJEXT): {$(VPATH)}method.h
eval.$(OBJEXT): {$(VPATH)}missing.h
eval.$(OBJEXT): {$(VPATH)}mjit.h
eval.$(OBJEXT): {$(VPATH)}node.h
eval.$(OBJEXT): {$(VPATH)}onigmo.h
eval.$(OBJEXT): {$(VPATH)}oniguruma.h
@ -1833,6 +1837,7 @@ gc.$(OBJEXT): {$(VPATH)}internal.h
gc.$(OBJEXT): {$(VPATH)}io.h
gc.$(OBJEXT): {$(VPATH)}method.h
gc.$(OBJEXT): {$(VPATH)}missing.h
gc.$(OBJEXT): {$(VPATH)}mjit.h
gc.$(OBJEXT): {$(VPATH)}node.h
gc.$(OBJEXT): {$(VPATH)}onigmo.h
gc.$(OBJEXT): {$(VPATH)}oniguruma.h
@ -1973,6 +1978,7 @@ iseq.$(OBJEXT): {$(VPATH)}iseq.c
iseq.$(OBJEXT): {$(VPATH)}iseq.h
iseq.$(OBJEXT): {$(VPATH)}method.h
iseq.$(OBJEXT): {$(VPATH)}missing.h
iseq.$(OBJEXT): {$(VPATH)}mjit.h
iseq.$(OBJEXT): {$(VPATH)}node.h
iseq.$(OBJEXT): {$(VPATH)}node_name.inc
iseq.$(OBJEXT): {$(VPATH)}onigmo.h
@ -1987,6 +1993,15 @@ iseq.$(OBJEXT): {$(VPATH)}util.h
iseq.$(OBJEXT): {$(VPATH)}vm_core.h
iseq.$(OBJEXT): {$(VPATH)}vm_debug.h
iseq.$(OBJEXT): {$(VPATH)}vm_opts.h
mjit.$(OBJEXT): $(top_srcdir)/revision.h
mjit.$(OBJEXT): {$(VPATH)}mjit.c
mjit.$(OBJEXT): {$(VPATH)}mjit.h
mjit.$(OBJEXT): {$(VPATH)}ruby_assert.h
mjit.$(OBJEXT): {$(VPATH)}version.h
mjit.$(OBJEXT): {$(VPATH)}vm_core.h
mjit_compile.$(OBJEXT): {$(VPATH)}internal.h
mjit_compile.$(OBJEXT): {$(VPATH)}mjit_compile.c
mjit_compile.$(OBJEXT): {$(VPATH)}vm_core.h
load.$(OBJEXT): $(CCAN_DIR)/check_type/check_type.h
load.$(OBJEXT): $(CCAN_DIR)/container_of/container_of.h
load.$(OBJEXT): $(CCAN_DIR)/list/list.h
@ -2460,6 +2475,7 @@ ruby.$(OBJEXT): {$(VPATH)}internal.h
ruby.$(OBJEXT): {$(VPATH)}io.h
ruby.$(OBJEXT): {$(VPATH)}method.h
ruby.$(OBJEXT): {$(VPATH)}missing.h
ruby.$(OBJEXT): {$(VPATH)}mjit.h
ruby.$(OBJEXT): {$(VPATH)}node.h
ruby.$(OBJEXT): {$(VPATH)}onigmo.h
ruby.$(OBJEXT): {$(VPATH)}oniguruma.h
@ -2817,6 +2833,7 @@ vm.$(OBJEXT): {$(VPATH)}io.h
vm.$(OBJEXT): {$(VPATH)}iseq.h
vm.$(OBJEXT): {$(VPATH)}method.h
vm.$(OBJEXT): {$(VPATH)}missing.h
vm.$(OBJEXT): {$(VPATH)}mjit.h
vm.$(OBJEXT): {$(VPATH)}node.h
vm.$(OBJEXT): {$(VPATH)}onigmo.h
vm.$(OBJEXT): {$(VPATH)}oniguruma.h

cont.c (9 changed lines)

@ -13,6 +13,7 @@
#include "vm_core.h"
#include "gc.h"
#include "eval_intern.h"
#include "mjit.h"
/* FIBER_USE_NATIVE enables Fiber performance improvement using system
* dependent method such as make/setcontext on POSIX system or
@ -110,6 +111,8 @@ typedef struct rb_context_struct {
rb_jmpbuf_t jmpbuf;
rb_ensure_entry_t *ensure_array;
rb_ensure_list_t *ensure_list;
/* Pointer to MJIT info about the continuation. */
struct mjit_cont *mjit_cont;
} rb_context_t;
@ -363,6 +366,9 @@ cont_free(void *ptr)
#endif
RUBY_FREE_UNLESS_NULL(cont->saved_vm_stack.ptr);
if (mjit_init_p && cont->mjit_cont != NULL) {
mjit_cont_free(cont->mjit_cont);
}
/* free rb_cont_t or rb_fiber_t */
ruby_xfree(ptr);
RUBY_FREE_LEAVE("cont");
@ -547,6 +553,9 @@ cont_init(rb_context_t *cont, rb_thread_t *th)
cont->saved_ec.local_storage = NULL;
cont->saved_ec.local_storage_recursive_hash = Qnil;
cont->saved_ec.local_storage_recursive_hash_for_trace = Qnil;
if (mjit_init_p) {
cont->mjit_cont = mjit_cont_new(&cont->saved_ec);
}
}
static rb_context_t *

eval.c (3 changed lines)

@ -17,6 +17,7 @@
#include "gc.h"
#include "ruby/vm.h"
#include "vm_core.h"
#include "mjit.h"
#include "probes_helper.h"
NORETURN(void rb_raise_jump(VALUE, VALUE));
@ -218,6 +219,8 @@ ruby_cleanup(volatile int ex)
}
}
mjit_finish(); /* We still need ISeqs here. */
ruby_finalize_1();
/* unlock again if finalizer took mutexes. */

gc.c (5 changed lines)

@ -35,6 +35,7 @@
#include <sys/types.h>
#include "ruby_assert.h"
#include "debug_counter.h"
#include "mjit.h"
#undef rb_data_object_wrap
@ -6613,6 +6614,8 @@ gc_enter(rb_objspace_t *objspace, const char *event)
GC_ASSERT(during_gc == 0);
if (RGENGC_CHECK_MODE >= 3) gc_verify_internal_consistency(Qnil);
mjit_gc_start_hook();
during_gc = TRUE;
gc_report(1, objspace, "gc_entr: %s [%s]\n", event, gc_current_status(objspace));
gc_record(objspace, 0, event);
@ -6628,6 +6631,8 @@ gc_exit(rb_objspace_t *objspace, const char *event)
gc_record(objspace, 1, event);
gc_report(1, objspace, "gc_exit: %s [%s]\n", event, gc_current_status(objspace));
during_gc = FALSE;
mjit_gc_finish_hook();
}
static void *

iseq.c (2 changed lines)

@ -26,6 +26,7 @@
#include "insns.inc"
#include "insns_info.inc"
#include "mjit.h"
VALUE rb_cISeq;
static VALUE iseqw_new(const rb_iseq_t *iseq);
@ -79,6 +80,7 @@ rb_iseq_free(const rb_iseq_t *iseq)
RUBY_FREE_ENTER("iseq");
if (iseq) {
mjit_free_iseq(iseq); /* Notify MJIT */
if (iseq->body) {
ruby_xfree((void *)iseq->body->iseq_encoded);
ruby_xfree((void *)iseq->body->insns_info.body);

mjit.c (new file, 1219 lines)

File diff suppressed because it is too large.

mjit.h (new file, 138 lines)

@ -0,0 +1,138 @@
/**********************************************************************
mjit.h - Interface to MRI method JIT compiler
Copyright (C) 2017 Vladimir Makarov <vmakarov@redhat.com>.
**********************************************************************/
#ifndef RUBY_MJIT_H
#define RUBY_MJIT_H 1
#include "ruby.h"
/* Special address values of a function generated from the
corresponding iseq by MJIT: */
enum rb_mjit_iseq_func {
/* ISEQ was not queued yet for the machine code generation */
NOT_ADDED_JIT_ISEQ_FUNC = 0,
/* ISEQ is already queued for the machine code generation but the
code is not ready yet for the execution */
NOT_READY_JIT_ISEQ_FUNC = 1,
/* ISEQ included not compilable insn or some assertion failed */
NOT_COMPILABLE_JIT_ISEQ_FUNC = 2,
/* End mark */
LAST_JIT_ISEQ_FUNC = 3,
};
/* C compiler used to generate native code. */
enum rb_mjit_cc {
/* Not selected */
MJIT_CC_DEFAULT = 0,
/* GNU Compiler Collection */
MJIT_CC_GCC = 1,
/* LLVM/Clang */
MJIT_CC_CLANG = 2,
};
/* MJIT options which can be defined on the MRI command line. */
struct mjit_options {
char on; /* flag of MJIT usage */
/* Default: clang for macOS, cl for Windows, gcc for others. */
enum rb_mjit_cc cc;
/* Save temporary files after MRI finishes.  The temporary files
include the pre-compiled header, C code file generated for ISEQ,
and the corresponding object file. */
char save_temps;
/* Print MJIT warnings to stderr. */
char warnings;
/* Disable compiler optimization and add debug symbols. It can be
very slow. */
char debug;
/* If not 0, all ISeqs are synchronously compiled. For testing. */
unsigned int wait;
/* Number of calls to trigger JIT compilation. For testing. */
unsigned int min_calls;
/* Force printing info about MJIT work of level VERBOSE or
less. 0=silence, 1=medium, 2=verbose. */
int verbose;
/* Maximal permitted number of iseq JIT codes in a MJIT memory
cache. */
int max_cache_size;
};
typedef VALUE (*mjit_func_t)(rb_execution_context_t *, rb_control_frame_t *);
RUBY_SYMBOL_EXPORT_BEGIN
extern struct mjit_options mjit_opts;
extern int mjit_init_p;
extern void mjit_add_iseq_to_process(const rb_iseq_t *iseq);
extern mjit_func_t mjit_get_iseq_func(const struct rb_iseq_constant_body *body);
RUBY_SYMBOL_EXPORT_END
extern int mjit_compile(FILE *f, const struct rb_iseq_constant_body *body, const char *funcname);
extern void mjit_init(struct mjit_options *opts);
extern void mjit_finish(void);
extern void mjit_gc_start_hook(void);
extern void mjit_gc_finish_hook(void);
extern void mjit_free_iseq(const rb_iseq_t *iseq);
extern void mjit_mark(void);
extern struct mjit_cont *mjit_cont_new(rb_execution_context_t *ec);
extern void mjit_cont_free(struct mjit_cont *cont);
/* A threshold used to reject long iseqs from JITting, as such iseqs
take too much time to be compiled. */
#define JIT_ISEQ_SIZE_THRESHOLD 1000
/* Return TRUE if given ISeq body should be compiled by MJIT */
static inline int
mjit_target_iseq_p(struct rb_iseq_constant_body *body)
{
return (body->type == ISEQ_TYPE_METHOD || body->type == ISEQ_TYPE_BLOCK)
&& body->iseq_size < JIT_ISEQ_SIZE_THRESHOLD;
}
/* Try to execute the current iseq in ec. Use JIT code if it is ready.
If it is not, add ISEQ to the compilation queue and return Qundef. */
static inline VALUE
mjit_exec(rb_execution_context_t *ec)
{
const rb_iseq_t *iseq;
struct rb_iseq_constant_body *body;
long unsigned total_calls;
mjit_func_t func;
if (!mjit_init_p)
return Qundef;
iseq = ec->cfp->iseq;
body = iseq->body;
total_calls = ++body->total_calls;
func = body->jit_func;
if (UNLIKELY(mjit_opts.wait && mjit_opts.min_calls == total_calls && mjit_target_iseq_p(body)
&& (enum rb_mjit_iseq_func)func == NOT_ADDED_JIT_ISEQ_FUNC)) {
mjit_add_iseq_to_process(iseq);
func = mjit_get_iseq_func(body);
}
if (UNLIKELY((ptrdiff_t)func <= (ptrdiff_t)LAST_JIT_ISEQ_FUNC)) {
switch ((enum rb_mjit_iseq_func)func) {
case NOT_ADDED_JIT_ISEQ_FUNC:
if (total_calls == mjit_opts.min_calls && mjit_target_iseq_p(body)) {
mjit_add_iseq_to_process(iseq);
}
return Qundef;
case NOT_READY_JIT_ISEQ_FUNC:
case NOT_COMPILABLE_JIT_ISEQ_FUNC:
return Qundef;
default: /* to avoid warning with LAST_JIT_ISEQ_FUNC */
break;
}
}
return func(ec, ec->cfp);
}
#endif /* RUBY_MJIT_H */

mjit_compile.c (new file, 18 lines)

@ -0,0 +1,18 @@
/**********************************************************************
mjit_compile.c - MRI method JIT compiler
Copyright (C) 2017 Takashi Kokubun <takashikkbn@gmail.com>.
**********************************************************************/
#include "internal.h"
#include "vm_core.h"
/* Compile ISeq to C code in F. Return TRUE if compilation succeeds. */
int
mjit_compile(FILE *f, const struct rb_iseq_constant_body *body, const char *funcname)
{
/* TODO: Write your own JIT compiler here. */
return FALSE;
}

ruby.c (75 changed lines)

@ -51,6 +51,8 @@
#include "ruby/util.h"
#include "mjit.h"
#ifndef HAVE_STDLIB_H
char *getenv();
#endif
@ -135,6 +137,7 @@ struct ruby_cmdline_options {
VALUE req_list;
unsigned int features;
unsigned int dump;
struct mjit_options mjit;
int safe_level;
int sflag, xflag;
unsigned int warning: 1;
@ -193,7 +196,7 @@ static void
show_usage_line(const char *str, unsigned int namelen, unsigned int secondlen, int help)
{
const unsigned int w = 16;
const int wrap = help && namelen + secondlen - 2 > w;
const int wrap = help && namelen + secondlen - 1 > w;
printf(" %.*s%-*.*s%-*s%s\n", namelen-1, str,
(wrap ? 0 : w - namelen + 1),
(help ? secondlen-1 : 0), str + namelen,
@ -238,6 +241,8 @@ usage(const char *name, int help)
M("-w", "", "turn warnings on for your script"),
M("-W[level=2]", "", "set warning level; 0=silence, 1=medium, 2=verbose"),
M("-x[directory]", "", "strip off text before #!ruby line and perhaps cd to directory"),
M("--jit", "", "enable MJIT with default options (experimental)"),
M("--jit-[option]","", "enable MJIT with an option (experimental)"),
M("-h", "", "show this message, --help for more info"),
};
static const struct message help_msg[] = {
@ -263,6 +268,16 @@ usage(const char *name, int help)
M("rubyopt", "", "RUBYOPT environment variable (default: enabled)"),
M("frozen-string-literal", "", "freeze all string literals (default: disabled)"),
};
static const struct message mjit_options[] = {
M("--jit-cc=cc", "", "C compiler to generate native code (gcc, clang)"),
M("--jit-warnings", "", "Enable printing MJIT warnings"),
M("--jit-debug", "", "Enable MJIT debugging (very slow)"),
M("--jit-wait", "", "Wait until JIT compilation is finished every time (for testing)"),
M("--jit-save-temps", "", "Save MJIT temporary files in $TMP or /tmp (for testing)"),
M("--jit-verbose=num", "", "Print MJIT logs of level num or less to stderr (default: 0)"),
M("--jit-max-cache=num", "", "Max number of methods to be JIT-ed in a cache (default: 1000)"),
M("--jit-min-calls=num", "", "Number of calls to trigger JIT (for testing, default: 5)"),
};
int i;
const int num = numberof(usage_msg) - (help ? 1 : 0);
#define SHOW(m) show_usage_line((m).str, (m).namelen, (m).secondlen, help)
@ -281,6 +296,9 @@ usage(const char *name, int help)
puts("Features:");
for (i = 0; i < numberof(features); ++i)
SHOW(features[i]);
puts("MJIT options (experimental):");
for (i = 0; i < numberof(mjit_options); ++i)
SHOW(mjit_options[i]);
}
#define rubylib_path_new rb_str_new
@ -893,6 +911,55 @@ set_option_encoding_once(const char *type, VALUE *name, const char *e, long elen
#define set_source_encoding_once(opt, e, elen) \
set_option_encoding_once("source", &(opt)->src.enc.name, (e), (elen))
static enum rb_mjit_cc
parse_mjit_cc(const char *s)
{
if (strcmp(s, "gcc") == 0) {
return MJIT_CC_GCC;
}
else if (strcmp(s, "clang") == 0) {
return MJIT_CC_CLANG;
}
else {
rb_raise(rb_eRuntimeError, "invalid C compiler `%s' (available C compilers: gcc, clang)", s);
}
}
static void
setup_mjit_options(const char *s, struct mjit_options *mjit_opt)
{
mjit_opt->on = 1;
if (*s == 0) return;
if (strncmp(s, "-cc=", 4) == 0) {
mjit_opt->cc = parse_mjit_cc(s + 4);
}
else if (strcmp(s, "-warnings") == 0) {
mjit_opt->warnings = 1;
}
else if (strcmp(s, "-debug") == 0) {
mjit_opt->debug = 1;
}
else if (strcmp(s, "-wait") == 0) {
mjit_opt->wait = 1;
}
else if (strcmp(s, "-save-temps") == 0) {
mjit_opt->save_temps = 1;
}
else if (strncmp(s, "-verbose=", 9) == 0) {
mjit_opt->verbose = atoi(s + 9);
}
else if (strncmp(s, "-max-cache=", 11) == 0) {
mjit_opt->max_cache_size = atoi(s + 11);
}
else if (strncmp(s, "-min-calls=", 11) == 0) {
mjit_opt->min_calls = atoi(s + 11);
}
else {
rb_raise(rb_eRuntimeError,
"invalid MJIT option `%s' (--help will show valid MJIT options)", s + 1);
}
}
static long
proc_options(long argc, char **argv, ruby_cmdline_options_t *opt, int envopt)
{
@ -1245,6 +1312,9 @@ proc_options(long argc, char **argv, ruby_cmdline_options_t *opt, int envopt)
opt->verbose = 1;
ruby_verbose = Qtrue;
}
else if (strncmp("jit", s, 3) == 0) {
setup_mjit_options(s + 3, &opt->mjit);
}
else if (strcmp("yydebug", s) == 0) {
if (envopt) goto noenvopt_long;
opt->dump |= DUMP_BIT(yydebug);
@ -1481,6 +1551,9 @@ process_options(int argc, char **argv, ruby_cmdline_options_t *opt)
opt->intern.enc.name = int_enc_name;
}
if (opt->mjit.on)
mjit_init(&opt->mjit);
if (opt->src.enc.name)
rb_warning("-K is specified; it is for 1.8 compatibility and may cause odd behavior");

test/lib/zombie_hunter.rb

@ -1,4 +1,8 @@
# frozen_string_literal: true
# There might be compiler processes executed by MJIT
return if RubyVM::MJIT.enabled?
module ZombieHunter
def after_teardown
super

test/ruby/test_io.rb

@ -543,6 +543,9 @@ class TestIO < Test::Unit::TestCase
if have_nonblock?
def test_copy_stream_no_busy_wait
# JIT has busy wait on GC. It's hard to test this with JIT.
skip "MJIT has busy wait on GC. We can't test this with JIT." if RubyVM::MJIT.enabled?
msg = 'r58534 [ruby-core:80969] [Backport #13533]'
IO.pipe do |r,w|
r.nonblock = true

test/ruby/test_rubyoptions.rb

@ -26,7 +26,7 @@ class TestRubyOptions < Test::Unit::TestCase
def test_usage
assert_in_out_err(%w(-h)) do |r, e|
assert_operator(r.size, :<=, 24)
assert_operator(r.size, :<=, 25)
longer = r[1..-1].select {|x| x.size > 80}
assert_equal([], longer)
assert_equal([], e)

thread.c

@ -359,7 +359,7 @@ rb_thread_debug(
if (debug_mutex_initialized == 1) {
debug_mutex_initialized = 0;
native_mutex_initialize(&debug_mutex);
rb_native_mutex_initialize(&debug_mutex);
}
va_start(args, fmt);
@ -377,31 +377,31 @@ rb_vm_gvl_destroy(rb_vm_t *vm)
{
gvl_release(vm);
gvl_destroy(vm);
native_mutex_destroy(&vm->thread_destruct_lock);
rb_native_mutex_destroy(&vm->thread_destruct_lock);
}
void
rb_nativethread_lock_initialize(rb_nativethread_lock_t *lock)
{
native_mutex_initialize(lock);
rb_native_mutex_initialize(lock);
}
void
rb_nativethread_lock_destroy(rb_nativethread_lock_t *lock)
{
native_mutex_destroy(lock);
rb_native_mutex_destroy(lock);
}
void
rb_nativethread_lock_lock(rb_nativethread_lock_t *lock)
{
native_mutex_lock(lock);
rb_native_mutex_lock(lock);
}
void
rb_nativethread_lock_unlock(rb_nativethread_lock_t *lock)
{
native_mutex_unlock(lock);
rb_native_mutex_unlock(lock);
}
static int
@ -417,15 +417,15 @@ unblock_function_set(rb_thread_t *th, rb_unblock_function_t *func, void *arg, in
RUBY_VM_CHECK_INTS(th->ec);
}
native_mutex_lock(&th->interrupt_lock);
rb_native_mutex_lock(&th->interrupt_lock);
} while (RUBY_VM_INTERRUPTED_ANY(th->ec) &&
(native_mutex_unlock(&th->interrupt_lock), TRUE));
(rb_native_mutex_unlock(&th->interrupt_lock), TRUE));
VM_ASSERT(th->unblock.func == NULL);
th->unblock.func = func;
th->unblock.arg = arg;
native_mutex_unlock(&th->interrupt_lock);
rb_native_mutex_unlock(&th->interrupt_lock);
return TRUE;
}
@ -433,15 +433,15 @@ unblock_function_set(rb_thread_t *th, rb_unblock_function_t *func, void *arg, in
static void
unblock_function_clear(rb_thread_t *th)
{
native_mutex_lock(&th->interrupt_lock);
rb_native_mutex_lock(&th->interrupt_lock);
th->unblock.func = NULL;
native_mutex_unlock(&th->interrupt_lock);
rb_native_mutex_unlock(&th->interrupt_lock);
}
static void
rb_threadptr_interrupt_common(rb_thread_t *th, int trap)
{
native_mutex_lock(&th->interrupt_lock);
rb_native_mutex_lock(&th->interrupt_lock);
if (trap) {
RUBY_VM_SET_TRAP_INTERRUPT(th->ec);
}
@ -454,7 +454,7 @@ rb_threadptr_interrupt_common(rb_thread_t *th, int trap)
else {
/* none */
}
native_mutex_unlock(&th->interrupt_lock);
rb_native_mutex_unlock(&th->interrupt_lock);
}
void
@ -585,7 +585,7 @@ thread_cleanup_func(void *th_ptr, int atfork)
if (atfork)
return;
native_mutex_destroy(&th->interrupt_lock);
rb_native_mutex_destroy(&th->interrupt_lock);
native_thread_destroy(th);
}
@ -739,10 +739,10 @@ thread_start_func_2(rb_thread_t *th, VALUE *stack_start, VALUE *register_stack_s
rb_fiber_close(th->ec->fiber_ptr);
}
native_mutex_lock(&th->vm->thread_destruct_lock);
rb_native_mutex_lock(&th->vm->thread_destruct_lock);
/* make sure vm->running_thread never point me after this point.*/
th->vm->running_thread = NULL;
native_mutex_unlock(&th->vm->thread_destruct_lock);
rb_native_mutex_unlock(&th->vm->thread_destruct_lock);
thread_cleanup_func(th, FALSE);
gvl_release(th->vm);
@ -773,7 +773,7 @@ thread_create_core(VALUE thval, VALUE args, VALUE (*fn)(ANYARGS))
th->pending_interrupt_mask_stack = rb_ary_dup(current_th->pending_interrupt_mask_stack);
RBASIC_CLEAR_CLASS(th->pending_interrupt_mask_stack);
native_mutex_initialize(&th->interrupt_lock);
rb_native_mutex_initialize(&th->interrupt_lock);
/* kick thread */
err = native_thread_create(th);
@ -4096,12 +4096,12 @@ timer_thread_function(void *arg)
* vm->running_thread switch. however it guarantees th->running_thread
* point to valid pointer or NULL.
*/
native_mutex_lock(&vm->thread_destruct_lock);
rb_native_mutex_lock(&vm->thread_destruct_lock);
/* for time slice */
if (vm->running_thread) {
RUBY_VM_SET_TIMER_INTERRUPT(vm->running_thread->ec);
}
native_mutex_unlock(&vm->thread_destruct_lock);
rb_native_mutex_unlock(&vm->thread_destruct_lock);
/* check signal */
rb_threadptr_check_signal(vm->main_thread);
@ -4936,8 +4936,8 @@ Init_Thread(void)
/* acquire global vm lock */
gvl_init(th->vm);
gvl_acquire(th->vm, th);
native_mutex_initialize(&th->vm->thread_destruct_lock);
native_mutex_initialize(&th->interrupt_lock);
rb_native_mutex_initialize(&th->vm->thread_destruct_lock);
rb_native_mutex_initialize(&th->interrupt_lock);
th->pending_interrupt_queue = rb_ary_tmp_new(0);
th->pending_interrupt_queue_checked = 0;

thread_pthread.c

@ -12,6 +12,7 @@
#ifdef THREAD_SYSTEM_DEPENDENT_IMPLEMENTATION
#include "gc.h"
#include "mjit.h"
#ifdef HAVE_SYS_RESOURCE_H
#include <sys/resource.h>
@ -34,16 +35,16 @@
#include <kernel/OS.h>
#endif
static void native_mutex_lock(rb_nativethread_lock_t *lock);
static void native_mutex_unlock(rb_nativethread_lock_t *lock);
void rb_native_mutex_lock(rb_nativethread_lock_t *lock);
void rb_native_mutex_unlock(rb_nativethread_lock_t *lock);
static int native_mutex_trylock(rb_nativethread_lock_t *lock);
static void native_mutex_initialize(rb_nativethread_lock_t *lock);
static void native_mutex_destroy(rb_nativethread_lock_t *lock);
static void native_cond_signal(rb_nativethread_cond_t *cond);
static void native_cond_broadcast(rb_nativethread_cond_t *cond);
static void native_cond_wait(rb_nativethread_cond_t *cond, rb_nativethread_lock_t *mutex);
static void native_cond_initialize(rb_nativethread_cond_t *cond, int flags);
static void native_cond_destroy(rb_nativethread_cond_t *cond);
void rb_native_mutex_initialize(rb_nativethread_lock_t *lock);
void rb_native_mutex_destroy(rb_nativethread_lock_t *lock);
void rb_native_cond_signal(rb_nativethread_cond_t *cond);
void rb_native_cond_broadcast(rb_nativethread_cond_t *cond);
void rb_native_cond_wait(rb_nativethread_cond_t *cond, rb_nativethread_lock_t *mutex);
void rb_native_cond_initialize(rb_nativethread_cond_t *cond, int flags);
void rb_native_cond_destroy(rb_nativethread_cond_t *cond);
static void rb_thread_wakeup_timer_thread_low(void);
static struct {
pthread_t id;
@ -84,14 +85,14 @@ gvl_acquire_common(rb_vm_t *vm)
}
while (vm->gvl.acquired) {
native_cond_wait(&vm->gvl.cond, &vm->gvl.lock);
rb_native_cond_wait(&vm->gvl.cond, &vm->gvl.lock);
}
vm->gvl.waiting--;
if (vm->gvl.need_yield) {
vm->gvl.need_yield = 0;
native_cond_signal(&vm->gvl.switch_cond);
rb_native_cond_signal(&vm->gvl.switch_cond);
}
}
@ -101,9 +102,9 @@ gvl_acquire_common(rb_vm_t *vm)
static void
gvl_acquire(rb_vm_t *vm, rb_thread_t *th)
{
native_mutex_lock(&vm->gvl.lock);
rb_native_mutex_lock(&vm->gvl.lock);
gvl_acquire_common(vm);
native_mutex_unlock(&vm->gvl.lock);
rb_native_mutex_unlock(&vm->gvl.lock);
}
static void
@ -111,28 +112,28 @@ gvl_release_common(rb_vm_t *vm)
{
vm->gvl.acquired = 0;
if (vm->gvl.waiting > 0)
native_cond_signal(&vm->gvl.cond);
rb_native_cond_signal(&vm->gvl.cond);
}
static void
gvl_release(rb_vm_t *vm)
{
native_mutex_lock(&vm->gvl.lock);
rb_native_mutex_lock(&vm->gvl.lock);
gvl_release_common(vm);
native_mutex_unlock(&vm->gvl.lock);
rb_native_mutex_unlock(&vm->gvl.lock);
}
static void
gvl_yield(rb_vm_t *vm, rb_thread_t *th)
{
native_mutex_lock(&vm->gvl.lock);
rb_native_mutex_lock(&vm->gvl.lock);
gvl_release_common(vm);
/* Another thread is processing GVL yield. */
if (UNLIKELY(vm->gvl.wait_yield)) {
while (vm->gvl.wait_yield)
native_cond_wait(&vm->gvl.switch_wait_cond, &vm->gvl.lock);
rb_native_cond_wait(&vm->gvl.switch_wait_cond, &vm->gvl.lock);
goto acquire;
}
@ -141,28 +142,28 @@ gvl_yield(rb_vm_t *vm, rb_thread_t *th)
vm->gvl.need_yield = 1;
vm->gvl.wait_yield = 1;
while (vm->gvl.need_yield)
native_cond_wait(&vm->gvl.switch_cond, &vm->gvl.lock);
rb_native_cond_wait(&vm->gvl.switch_cond, &vm->gvl.lock);
vm->gvl.wait_yield = 0;
}
else {
native_mutex_unlock(&vm->gvl.lock);
rb_native_mutex_unlock(&vm->gvl.lock);
sched_yield();
native_mutex_lock(&vm->gvl.lock);
rb_native_mutex_lock(&vm->gvl.lock);
}
native_cond_broadcast(&vm->gvl.switch_wait_cond);
rb_native_cond_broadcast(&vm->gvl.switch_wait_cond);
acquire:
gvl_acquire_common(vm);
native_mutex_unlock(&vm->gvl.lock);
rb_native_mutex_unlock(&vm->gvl.lock);
}
static void
gvl_init(rb_vm_t *vm)
{
native_mutex_initialize(&vm->gvl.lock);
native_cond_initialize(&vm->gvl.cond, RB_CONDATTR_CLOCK_MONOTONIC);
native_cond_initialize(&vm->gvl.switch_cond, RB_CONDATTR_CLOCK_MONOTONIC);
native_cond_initialize(&vm->gvl.switch_wait_cond, RB_CONDATTR_CLOCK_MONOTONIC);
rb_native_mutex_initialize(&vm->gvl.lock);
rb_native_cond_initialize(&vm->gvl.cond, RB_CONDATTR_CLOCK_MONOTONIC);
rb_native_cond_initialize(&vm->gvl.switch_cond, RB_CONDATTR_CLOCK_MONOTONIC);
rb_native_cond_initialize(&vm->gvl.switch_wait_cond, RB_CONDATTR_CLOCK_MONOTONIC);
vm->gvl.acquired = 0;
vm->gvl.waiting = 0;
vm->gvl.need_yield = 0;
@ -172,10 +173,10 @@ gvl_init(rb_vm_t *vm)
static void
gvl_destroy(rb_vm_t *vm)
{
native_cond_destroy(&vm->gvl.switch_wait_cond);
native_cond_destroy(&vm->gvl.switch_cond);
native_cond_destroy(&vm->gvl.cond);
native_mutex_destroy(&vm->gvl.lock);
rb_native_cond_destroy(&vm->gvl.switch_wait_cond);
rb_native_cond_destroy(&vm->gvl.switch_cond);
rb_native_cond_destroy(&vm->gvl.cond);
rb_native_mutex_destroy(&vm->gvl.lock);
}
#if defined(HAVE_WORKING_FORK)
@ -202,8 +203,8 @@ mutex_debug(const char *msg, void *lock)
}
}
static void
native_mutex_lock(pthread_mutex_t *lock)
void
rb_native_mutex_lock(pthread_mutex_t *lock)
{
int r;
mutex_debug("lock", lock);
@ -212,8 +213,8 @@ native_mutex_lock(pthread_mutex_t *lock)
}
}
static void
native_mutex_unlock(pthread_mutex_t *lock)
void
rb_native_mutex_unlock(pthread_mutex_t *lock)
{
int r;
mutex_debug("unlock", lock);
@ -238,8 +239,8 @@ native_mutex_trylock(pthread_mutex_t *lock)
return 0;
}
static void
native_mutex_initialize(pthread_mutex_t *lock)
void
rb_native_mutex_initialize(pthread_mutex_t *lock)
{
int r = pthread_mutex_init(lock, 0);
mutex_debug("init", lock);
@ -248,8 +249,8 @@ native_mutex_initialize(pthread_mutex_t *lock)
}
}
static void
native_mutex_destroy(pthread_mutex_t *lock)
void
rb_native_mutex_destroy(pthread_mutex_t *lock)
{
int r = pthread_mutex_destroy(lock);
mutex_debug("destroy", lock);
@ -258,8 +259,8 @@ native_mutex_destroy(pthread_mutex_t *lock)
}
}
static void
native_cond_initialize(rb_nativethread_cond_t *cond, int flags)
void
rb_native_cond_initialize(rb_nativethread_cond_t *cond, int flags)
{
int r;
# if USE_MONOTONIC_COND
@ -287,8 +288,8 @@ native_cond_initialize(rb_nativethread_cond_t *cond, int flags)
return;
}
static void
native_cond_destroy(rb_nativethread_cond_t *cond)
void
rb_native_cond_destroy(rb_nativethread_cond_t *cond)
{
int r = pthread_cond_destroy(&cond->cond);
if (r != 0) {
@ -302,12 +303,12 @@ native_cond_destroy(rb_nativethread_cond_t *cond)
*
* http://www.opensource.apple.com/source/Libc/Libc-763.11/pthreads/pthread_cond.c
*
* The following native_cond_signal and native_cond_broadcast functions
* The following rb_native_cond_signal and rb_native_cond_broadcast functions
* need to be retried until the pthread functions stop returning EAGAIN.
*/
static void
native_cond_signal(rb_nativethread_cond_t *cond)
void
rb_native_cond_signal(rb_nativethread_cond_t *cond)
{
int r;
do {
@ -318,20 +319,20 @@ native_cond_signal(rb_nativethread_cond_t *cond)
}
}
static void
native_cond_broadcast(rb_nativethread_cond_t *cond)
void
rb_native_cond_broadcast(rb_nativethread_cond_t *cond)
{
int r;
do {
r = pthread_cond_broadcast(&cond->cond);
} while (r == EAGAIN);
if (r != 0) {
rb_bug_errno("native_cond_broadcast", r);
rb_bug_errno("rb_native_cond_broadcast", r);
}
}
static void
native_cond_wait(rb_nativethread_cond_t *cond, pthread_mutex_t *mutex)
void
rb_native_cond_wait(rb_nativethread_cond_t *cond, pthread_mutex_t *mutex)
{
int r = pthread_cond_wait(&cond->cond, mutex);
if (r != 0) {
@ -449,7 +450,7 @@ Init_native_thread(rb_thread_t *th)
fill_thread_id_str(th);
native_thread_init(th);
#ifdef USE_UBF_LIST
native_mutex_initialize(&ubf_list_lock);
rb_native_mutex_initialize(&ubf_list_lock);
#endif
posix_signal(SIGVTALRM, null_func);
}
@ -462,14 +463,14 @@ native_thread_init(rb_thread_t *th)
#ifdef USE_UBF_LIST
list_node_init(&nd->ubf_list);
#endif
native_cond_initialize(&nd->sleep_cond, RB_CONDATTR_CLOCK_MONOTONIC);
rb_native_cond_initialize(&nd->sleep_cond, RB_CONDATTR_CLOCK_MONOTONIC);
ruby_thread_set_native(th);
}
static void
native_thread_destroy(rb_thread_t *th)
{
native_cond_destroy(&th->native_thread_data.sleep_cond);
rb_native_cond_destroy(&th->native_thread_data.sleep_cond);
}
#ifndef USE_THREAD_CACHE
@ -917,7 +918,7 @@ register_cached_thread_and_wait(void)
ts.tv_sec = tv.tv_sec + 60;
ts.tv_nsec = tv.tv_usec * 1000;
native_mutex_lock(&thread_cache_lock);
rb_native_mutex_lock(&thread_cache_lock);
{
entry->th_area = &th_area;
entry->cond = &cond;
@ -939,9 +940,9 @@ register_cached_thread_and_wait(void)
}
free(entry); /* ok */
native_cond_destroy(&cond);
rb_native_cond_destroy(&cond);
}
native_mutex_unlock(&thread_cache_lock);
rb_native_mutex_unlock(&thread_cache_lock);
return (rb_thread_t *)th_area;
}
@ -955,7 +956,7 @@ use_cached_thread(rb_thread_t *th)
struct cached_thread_entry *entry;
if (cached_thread_root) {
native_mutex_lock(&thread_cache_lock);
rb_native_mutex_lock(&thread_cache_lock);
entry = cached_thread_root;
{
if (cached_thread_root) {
@ -965,9 +966,9 @@ use_cached_thread(rb_thread_t *th)
}
}
if (result) {
native_cond_signal(entry->cond);
rb_native_cond_signal(entry->cond);
}
native_mutex_unlock(&thread_cache_lock);
rb_native_mutex_unlock(&thread_cache_lock);
}
#endif
return result;
@ -1067,7 +1068,7 @@ ubf_pthread_cond_signal(void *ptr)
{
rb_thread_t *th = (rb_thread_t *)ptr;
thread_debug("ubf_pthread_cond_signal (%p)\n", (void *)th);
native_cond_signal(&th->native_thread_data.sleep_cond);
rb_native_cond_signal(&th->native_thread_data.sleep_cond);
}
static void
@ -1101,7 +1102,7 @@ native_sleep(rb_thread_t *th, struct timeval *timeout_tv)
GVL_UNLOCK_BEGIN();
{
native_mutex_lock(lock);
rb_native_mutex_lock(lock);
th->unblock.func = ubf_pthread_cond_signal;
th->unblock.arg = th;
@ -1111,14 +1112,14 @@ native_sleep(rb_thread_t *th, struct timeval *timeout_tv)
}
else {
if (!timeout_tv)
native_cond_wait(cond, lock);
rb_native_cond_wait(cond, lock);
else
native_cond_timedwait(cond, lock, &timeout);
}
th->unblock.func = 0;
th->unblock.arg = 0;
native_mutex_unlock(lock);
rb_native_mutex_unlock(lock);
}
GVL_UNLOCK_END();
@ -1135,9 +1136,9 @@ register_ubf_list(rb_thread_t *th)
struct list_node *node = &th->native_thread_data.ubf_list;
if (list_empty((struct list_head*)node)) {
native_mutex_lock(&ubf_list_lock);
rb_native_mutex_lock(&ubf_list_lock);
list_add(&ubf_list_head, node);
native_mutex_unlock(&ubf_list_lock);
rb_native_mutex_unlock(&ubf_list_lock);
}
}
@ -1148,9 +1149,9 @@ unregister_ubf_list(rb_thread_t *th)
struct list_node *node = &th->native_thread_data.ubf_list;
if (!list_empty((struct list_head*)node)) {
native_mutex_lock(&ubf_list_lock);
rb_native_mutex_lock(&ubf_list_lock);
list_del_init(node);
native_mutex_unlock(&ubf_list_lock);
rb_native_mutex_unlock(&ubf_list_lock);
}
}
@ -1197,12 +1198,12 @@ ubf_wakeup_all_threads(void)
native_thread_data_t *dat;
if (!ubf_threads_empty()) {
native_mutex_lock(&ubf_list_lock);
rb_native_mutex_lock(&ubf_list_lock);
list_for_each(&ubf_list_head, dat, ubf_list) {
th = container_of(dat, rb_thread_t, native_thread_data);
ubf_wakeup_thread(th);
}
native_mutex_unlock(&ubf_list_lock);
rb_native_mutex_unlock(&ubf_list_lock);
}
}
@ -1535,9 +1536,9 @@ thread_timer(void *p)
#endif
#if !USE_SLEEPY_TIMER_THREAD
native_mutex_initialize(&timer_thread_lock);
native_cond_initialize(&timer_thread_cond, RB_CONDATTR_CLOCK_MONOTONIC);
native_mutex_lock(&timer_thread_lock);
rb_native_mutex_initialize(&timer_thread_lock);
rb_native_cond_initialize(&timer_thread_cond, RB_CONDATTR_CLOCK_MONOTONIC);
rb_native_mutex_lock(&timer_thread_lock);
#endif
while (system_working > 0) {
@ -1554,9 +1555,9 @@ thread_timer(void *p)
CLOSE_INVALIDATE(normal[0]);
CLOSE_INVALIDATE(low[0]);
#else
native_mutex_unlock(&timer_thread_lock);
native_cond_destroy(&timer_thread_cond);
native_mutex_destroy(&timer_thread_lock);
rb_native_mutex_unlock(&timer_thread_lock);
rb_native_cond_destroy(&timer_thread_cond);
rb_native_mutex_destroy(&timer_thread_lock);
#endif
if (TT_DEBUG) WRITE_CONST(2, "finish timer thread\n");
@ -1772,4 +1773,40 @@ rb_nativethread_self(void)
return pthread_self();
}
/* A function that wraps actual worker function, for pthread abstraction. */
static void *
mjit_worker(void *arg)
{
void (*worker_func)(void) = arg;
if (pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL) != 0) {
fprintf(stderr, "Cannot enable cancelation in MJIT worker\n");
}
#ifdef SET_CURRENT_THREAD_NAME
SET_CURRENT_THREAD_NAME("ruby-mjitworker"); /* 16 byte including NUL */
#endif
worker_func();
return NULL;
}
/* Launch MJIT thread. Returns FALSE if it fails to create thread. */
int
rb_thread_create_mjit_thread(void (*child_hook)(void), void (*worker_func)(void))
{
pthread_attr_t attr;
pthread_t worker_pid;
pthread_atfork(NULL, NULL, child_hook);
if (pthread_attr_init(&attr) == 0
&& pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) == 0
&& pthread_create(&worker_pid, &attr, mjit_worker, worker_func) == 0) {
/* jit_worker thread is not to be joined */
pthread_detach(worker_pid);
return TRUE;
}
else {
return FALSE;
}
}
#endif /* THREAD_SYSTEM_DEPENDENT_IMPLEMENTATION */
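The pthread path above boils down to one pattern: register a fork hook, create a system-scope thread, and detach it so it is never joined. A minimal, self-contained sketch of that pattern, with hypothetical stand-ins (`create_detached_worker`, `worker_ran`, `child_hook`) in place of the real MJIT pieces:

```c
#include <pthread.h>
#include <unistd.h>
#include <assert.h>

/* Hypothetical stand-ins for the MJIT worker; names are illustrative only. */
static volatile int worker_ran = 0;

static void child_hook(void) { /* would re-initialize worker state after fork(2) */ }
static void worker_func(void) { worker_ran = 1; }

static void *worker_wrapper(void *arg)
{
    void (*func)(void) = (void (*)(void))arg;
    func();
    return NULL;
}

/* Mirrors the shape of rb_thread_create_mjit_thread:
   fork hook, system scope, detached thread. */
static int create_detached_worker(void (*hook)(void), void (*func)(void))
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_atfork(NULL, NULL, hook);  /* hook runs in the child after fork */
    if (pthread_attr_init(&attr) == 0
        && pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) == 0
        && pthread_create(&tid, &attr, worker_wrapper, (void *)func) == 0) {
        pthread_detach(tid);           /* the worker is never joined */
        return 1;
    }
    return 0;
}
```

Detaching matters here because the worker outlives any caller that might join it; the patch relies on process exit (or an explicit cancellation protocol in mjit.c) to stop it.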


@ -24,8 +24,8 @@
static volatile DWORD ruby_native_thread_key = TLS_OUT_OF_INDEXES;
static int w32_wait_events(HANDLE *events, int count, DWORD timeout, rb_thread_t *th);
static void native_mutex_lock(rb_nativethread_lock_t *lock);
static void native_mutex_unlock(rb_nativethread_lock_t *lock);
void rb_native_mutex_lock(rb_nativethread_lock_t *lock);
void rb_native_mutex_unlock(rb_nativethread_lock_t *lock);
static void
w32_error(const char *func)
@ -54,7 +54,7 @@ w32_mutex_lock(HANDLE lock)
{
DWORD result;
while (1) {
thread_debug("native_mutex_lock: %p\n", lock);
thread_debug("rb_native_mutex_lock: %p\n", lock);
result = w32_wait_events(&lock, 1, INFINITE, 0);
switch (result) {
case WAIT_OBJECT_0:
@ -85,7 +85,7 @@ w32_mutex_create(void)
{
HANDLE lock = CreateMutex(NULL, FALSE, NULL);
if (lock == NULL) {
w32_error("native_mutex_initialize");
w32_error("rb_native_mutex_initialize");
}
return lock;
}
@ -280,10 +280,10 @@ native_sleep(rb_thread_t *th, struct timeval *tv)
{
DWORD ret;
native_mutex_lock(&th->interrupt_lock);
rb_native_mutex_lock(&th->interrupt_lock);
th->unblock.func = ubf_handle;
th->unblock.arg = th;
native_mutex_unlock(&th->interrupt_lock);
rb_native_mutex_unlock(&th->interrupt_lock);
if (RUBY_VM_INTERRUPTED(th->ec)) {
/* interrupted. return immediate */
@ -294,16 +294,16 @@ native_sleep(rb_thread_t *th, struct timeval *tv)
thread_debug("native_sleep done (%lu)\n", ret);
}
native_mutex_lock(&th->interrupt_lock);
rb_native_mutex_lock(&th->interrupt_lock);
th->unblock.func = 0;
th->unblock.arg = 0;
native_mutex_unlock(&th->interrupt_lock);
rb_native_mutex_unlock(&th->interrupt_lock);
}
GVL_UNLOCK_END();
}
static void
native_mutex_lock(rb_nativethread_lock_t *lock)
void
rb_native_mutex_lock(rb_nativethread_lock_t *lock)
{
#if USE_WIN32_MUTEX
w32_mutex_lock(lock->mutex);
@ -312,8 +312,8 @@ native_mutex_lock(rb_nativethread_lock_t *lock)
#endif
}
static void
native_mutex_unlock(rb_nativethread_lock_t *lock)
void
rb_native_mutex_unlock(rb_nativethread_lock_t *lock)
{
#if USE_WIN32_MUTEX
thread_debug("release mutex: %p\n", lock->mutex);
@ -343,8 +343,8 @@ native_mutex_trylock(rb_nativethread_lock_t *lock)
#endif
}
static void
native_mutex_initialize(rb_nativethread_lock_t *lock)
void
rb_native_mutex_initialize(rb_nativethread_lock_t *lock)
{
#if USE_WIN32_MUTEX
lock->mutex = w32_mutex_create();
@ -354,8 +354,8 @@ native_mutex_initialize(rb_nativethread_lock_t *lock)
#endif
}
static void
native_mutex_destroy(rb_nativethread_lock_t *lock)
void
rb_native_mutex_destroy(rb_nativethread_lock_t *lock)
{
#if USE_WIN32_MUTEX
w32_close_handle(lock->mutex);
@ -370,9 +370,8 @@ struct cond_event_entry {
HANDLE event;
};
#if 0
static void
native_cond_signal(rb_nativethread_cond_t *cond)
void
rb_native_cond_signal(rb_nativethread_cond_t *cond)
{
/* cond is guarded by mutex */
struct cond_event_entry *e = cond->next;
@ -390,8 +389,8 @@ native_cond_signal(rb_nativethread_cond_t *cond)
}
}
static void
native_cond_broadcast(rb_nativethread_cond_t *cond)
void
rb_native_cond_broadcast(rb_nativethread_cond_t *cond)
{
/* cond is guarded by mutex */
struct cond_event_entry *e = cond->next;
@ -426,14 +425,14 @@ native_cond_timedwait_ms(rb_nativethread_cond_t *cond, rb_nativethread_lock_t *m
head->prev->next = &entry;
head->prev = &entry;
native_mutex_unlock(mutex);
rb_native_mutex_unlock(mutex);
{
r = WaitForSingleObject(entry.event, msec);
if ((r != WAIT_OBJECT_0) && (r != WAIT_TIMEOUT)) {
rb_bug("native_cond_wait: WaitForSingleObject returns %lu", r);
rb_bug("rb_native_cond_wait: WaitForSingleObject returns %lu", r);
}
}
native_mutex_lock(mutex);
rb_native_mutex_lock(mutex);
entry.prev->next = entry.next;
entry.next->prev = entry.prev;
@ -442,12 +441,13 @@ native_cond_timedwait_ms(rb_nativethread_cond_t *cond, rb_nativethread_lock_t *m
return (r == WAIT_OBJECT_0) ? 0 : ETIMEDOUT;
}
static void
native_cond_wait(rb_nativethread_cond_t *cond, rb_nativethread_lock_t *mutex)
void
rb_native_cond_wait(rb_nativethread_cond_t *cond, rb_nativethread_lock_t *mutex)
{
native_cond_timedwait_ms(cond, mutex, INFINITE);
}
#if 0
static unsigned long
abs_timespec_to_timeout_ms(const struct timespec *ts)
{
@ -505,20 +505,20 @@ native_cond_timeout(rb_nativethread_cond_t *cond, struct timespec timeout_rel)
return timeout;
}
#endif
static void
native_cond_initialize(rb_nativethread_cond_t *cond, int flags)
void
rb_native_cond_initialize(rb_nativethread_cond_t *cond, int flags)
{
cond->next = (struct cond_event_entry *)cond;
cond->prev = (struct cond_event_entry *)cond;
}
static void
native_cond_destroy(rb_nativethread_cond_t *cond)
void
rb_native_cond_destroy(rb_nativethread_cond_t *cond)
{
/* */
}
#endif
void
ruby_init_stack(volatile VALUE *addr)
@ -777,4 +777,27 @@ native_set_thread_name(rb_thread_t *th)
{
}
static unsigned long __stdcall
mjit_worker(void *arg)
{
void (*worker_func)(void) = arg;
rb_w32_set_thread_description(GetCurrentThread(), L"ruby-mjitworker");
worker_func();
return 0;
}
/* Launch MJIT thread. Returns FALSE if it fails to create thread. */
int
rb_thread_create_mjit_thread(void (*child_hook)(void), void (*worker_func)(void))
{
size_t stack_size = 4 * 1024; /* 4KB is the minimum commit size */
HANDLE thread_id = w32_create_thread(stack_size, mjit_worker, worker_func);
if (thread_id == 0) {
return FALSE;
}
w32_resume_thread(thread_id);
return TRUE;
}
#endif /* THREAD_SYSTEM_DEPENDENT_IMPLEMENTATION */

vm.c

@ -298,6 +298,7 @@ static VALUE vm_invoke_proc(rb_execution_context_t *ec, rb_proc_t *proc, VALUE s
static VALUE rb_block_param_proxy;
#include "mjit.h"
#include "vm_insnhelper.h"
#include "vm_exec.h"
#include "vm_insnhelper.c"
@ -1786,8 +1787,10 @@ vm_exec(rb_execution_context_t *ec)
_tag.retval = Qnil;
if ((state = EC_EXEC_TAG()) == TAG_NONE) {
result = mjit_exec(ec);
vm_loop_start:
result = vm_exec_core(ec, initial);
if (result == Qundef)
result = vm_exec_core(ec, initial);
VM_ASSERT(ec->tag == &_tag);
if ((state = _tag.state) != TAG_NONE) {
err = (struct vm_throw_data *)result;
@ -1870,6 +1873,7 @@ vm_exec(rb_execution_context_t *ec)
*ec->cfp->sp++ = THROW_DATA_VAL(err);
#endif
ec->errinfo = Qnil;
result = Qundef;
goto vm_loop_start;
}
}
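The control-flow change above is small but central: `vm_exec` now asks `mjit_exec` for a result first, and only enters the interpreter core when it gets the `Qundef` sentinel back, which is also why `result` is reset to `Qundef` before every `goto vm_loop_start` below. A minimal sketch of that dispatch, with stand-in functions and an arbitrary sentinel value (MRI's actual `Qundef` encoding differs):

```c
#include <assert.h>

typedef unsigned long VALUE;
#define QUNDEF_SENTINEL ((VALUE)~0UL)  /* stand-in; not MRI's real Qundef */

static int jit_code_ready = 0;         /* flips once the JIT has compiled the iseq */

static VALUE mjit_exec_sketch(void)    /* returns the sentinel when no native code exists */
{
    return jit_code_ready ? (VALUE)100 : QUNDEF_SENTINEL;
}

static VALUE vm_exec_core_sketch(void) /* the interpreter loop */
{
    return (VALUE)200;
}

static VALUE vm_exec_sketch(void)
{
    VALUE result = mjit_exec_sketch();  /* try native code first */
    if (result == QUNDEF_SENTINEL)
        result = vm_exec_core_sketch(); /* fall back to the interpreter */
    return result;
}
```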
@ -1909,6 +1913,7 @@ vm_exec(rb_execution_context_t *ec)
if (cfp == escape_cfp) {
cfp->pc = cfp->iseq->body->iseq_encoded + entry->cont;
ec->errinfo = Qnil;
result = Qundef;
goto vm_loop_start;
}
}
@ -1943,6 +1948,7 @@ vm_exec(rb_execution_context_t *ec)
}
ec->errinfo = Qnil;
VM_ASSERT(ec->tag->state == TAG_NONE);
result = Qundef;
goto vm_loop_start;
}
}
@ -1994,6 +2000,7 @@ vm_exec(rb_execution_context_t *ec)
state = 0;
ec->tag->state = TAG_NONE;
ec->errinfo = Qnil;
result = Qundef;
goto vm_loop_start;
}
else {
@ -2122,6 +2129,8 @@ rb_vm_mark(void *ptr)
rb_vm_trace_mark_event_hooks(&vm->event_hooks);
rb_gc_mark_values(RUBY_NSIG, vm->trap_list.cmd);
mjit_mark();
}
RUBY_MARK_LEAVE("vm");
@ -2742,6 +2751,12 @@ core_hash_merge_kwd(int argc, VALUE *argv)
return hash;
}
static VALUE
mjit_enabled_p(void)
{
return mjit_init_p ? Qtrue : Qfalse;
}
extern VALUE *rb_gc_stack_start;
extern size_t rb_gc_stack_maxsize;
#ifdef __ia64
@ -2795,6 +2810,7 @@ Init_VM(void)
VALUE opts;
VALUE klass;
VALUE fcore;
VALUE mjit;
/* ::RubyVM */
rb_cRubyVM = rb_define_class("RubyVM", rb_cObject);
@ -2826,6 +2842,10 @@ Init_VM(void)
rb_gc_register_mark_object(fcore);
rb_mRubyVMFrozenCore = fcore;
/* RubyVM::MJIT */
mjit = rb_define_module_under(rb_cRubyVM, "MJIT");
rb_define_singleton_method(mjit, "enabled?", mjit_enabled_p, 0);
/*
* Document-class: Thread
*


@ -292,6 +292,9 @@ pathobj_realpath(VALUE pathobj)
}
}
/* A forward declaration */
struct rb_mjit_unit;
struct rb_iseq_constant_body {
enum iseq_type {
ISEQ_TYPE_TOP,
@ -414,6 +417,11 @@ struct rb_iseq_constant_body {
unsigned int ci_size;
unsigned int ci_kw_size;
unsigned int stack_max; /* for stack overflow check */
/* The following fields are MJIT related info. */
void *jit_func; /* function pointer for loaded native code */
long unsigned total_calls; /* number of total calls with `mjit_exec()` */
struct rb_mjit_unit *jit_unit;
};
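These new fields carry the whole JIT hand-off: `total_calls` determines when a method body is hot, and `jit_func` becomes the native entry point once compilation finishes. A toy sketch of the counting-and-dispatch idea; the threshold and the synchronous "compilation" here are illustrative only (the real MJIT compiles asynchronously in the worker thread):

```c
#include <stddef.h>

#define HOT_CALL_THRESHOLD 5            /* hypothetical; not MRI's real value */

typedef long (*jit_func_t)(long);

struct body_sketch {
    jit_func_t jit_func;                /* NULL until native code is ready */
    unsigned long total_calls;          /* bumped on every call */
};

static long interp(long x)   { return x + 1; }  /* stand-in interpreter */
static long compiled(long x) { return x + 1; }  /* stand-in native code */

static long call_body(struct body_sketch *body, long arg)
{
    if (++body->total_calls == HOT_CALL_THRESHOLD)
        body->jit_func = compiled;      /* a real JIT enqueues a compile job instead */
    if (body->jit_func != NULL)
        return body->jit_func(arg);     /* fast path: call native code */
    return interp(arg);                 /* cold path: interpret */
}
```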
/* T_IMEMO/iseq */


@ -129,7 +129,7 @@ enum vm_regan_acttype {
#define CALL_METHOD(calling, ci, cc) do { \
VALUE v = (*(cc)->call)(ec, GET_CFP(), (calling), (ci), (cc)); \
if (v == Qundef) { \
if (v == Qundef && (v = mjit_exec(ec)) == Qundef) { \
RESTORE_REGS(); \
NEXT_INSN(); \
} \


@ -276,7 +276,7 @@ LDSHARED_0 = @if exist $(@).manifest $(MINIRUBY) -run -e wait_writable -- -n 10
LDSHARED_1 = @if exist $(@).manifest $(MANIFESTTOOL) -manifest $(@).manifest -outputresource:$(@);2
LDSHARED_2 = @if exist $(@).manifest @$(RM) $(@:/=\).manifest
!endif
CPPFLAGS = $(DEFS) $(ARCHDEFS) $(CPPFLAGS)
CPPFLAGS = $(DEFS) $(ARCHDEFS) $(CPPFLAGS) -DMJIT_HEADER_BUILD_DIR=\""$(EXTOUT)/include/$(arch)"\" -DLIBRUBYARG_SHARED=\""$(LIBRUBYARG_SHARED)"\" -DLIBRUBY_LIBDIR=\""$(prefix)/lib"\" -DMJIT_HEADER_INSTALL_DIR=\""$(prefix)/include/$(RUBY_BASE_NAME)-$(ruby_version)/$(arch)"\"
DLDFLAGS = $(LDFLAGS) -dll
SOLIBS =