#ifndef RUBY_INSNHELPER_H
#define RUBY_INSNHELPER_H
/**********************************************************************

  insnhelper.h - helper macros to implement each instruction

  $Author$
  created at: 04/01/01 15:50:34 JST

  Copyright (C) 2004-2007 Koichi Sasada

**********************************************************************/

RUBY_SYMBOL_EXPORT_BEGIN

RUBY_EXTERN VALUE ruby_vm_const_missing_count;
RUBY_EXTERN rb_serial_t ruby_vm_global_method_state;
RUBY_EXTERN rb_serial_t ruby_vm_global_constant_state;
RUBY_EXTERN rb_serial_t ruby_vm_class_serial;

RUBY_SYMBOL_EXPORT_END

#if VM_COLLECT_USAGE_DETAILS
#define COLLECT_USAGE_INSN(insn) vm_collect_usage_insn(insn)
#define COLLECT_USAGE_OPERAND(insn, n, op) vm_collect_usage_operand((insn), (n), ((VALUE)(op)))
|
* probes.d: add DTrace probe declarations. [ruby-core:27448]
* array.c (empty_ary_alloc, ary_new): added array create DTrace probe.
* compile.c (rb_insns_name): allowing DTrace probes to access
instruction sequence name.
* Makefile.in: translate probes.d file to appropriate header file.
* common.mk: declare dependencies on the DTrace header.
* configure.in: add a test for existence of DTrace.
* eval.c (setup_exception): add a probe for when an exception is
raised.
* gc.c: Add DTrace probes for mark begin and end, and sweep begin and
end.
* hash.c (empty_hash_alloc): Add a probe for hash allocation.
* insns.def: Add probes for function entry and return.
* internal.h: function declaration for compile.c change.
* load.c (rb_f_load): add probes for `load` entry and exit, require
entry and exit, and wrapping search_required for load path search.
* object.c (rb_obj_alloc): added a probe for general object creation.
* parse.y (yycompile0): added a probe around parse and compile phase.
* string.c (empty_str_alloc, str_new): DTrace probes for string
allocation.
* test/dtrace/*: tests for DTrace probes.
* vm.c (vm_invoke_proc): add probes for function return on exception
raise, hash create, and instruction sequence execution.
* vm_core.h: add probe declarations for function entry and exit.
* vm_dump.c: add probes header file.
* vm_eval.c (vm_call0_cfunc, vm_call0_cfunc_with_frame): add probe on
function entry and return.
* vm_exec.c: expose instruction number to instruction name function.
* vm_insnshelper.c: add function entry and exit probes for cfunc
methods.
* vm_insnhelper.h: vm usage information is always collected, so
uncomment the functions.
12 19:14:50 2012 Akinori MUSHA <knu@iDaemons.org>
* configure.in (isinf, isnan): isinf() and isnan() are macros on
DragonFly which cannot be found by AC_REPLACE_FUNCS(). This
workaround enforces the fact that they exist on DragonFly.
12 15:59:38 2012 Shugo Maeda <shugo@ruby-lang.org>
* vm_core.h (rb_call_info_t::refinements), compile.c (new_callinfo),
vm_insnhelper.c (vm_search_method): revert r37616 because it's too
slow. [ruby-dev:46477]
* test/ruby/test_refinement.rb (test_inline_method_cache): skip
the test until the bug is fixed efficiently.
git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/trunk@37631 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
2012-11-12 16:52:12 -05:00
|
|
|
|
2012-10-04 08:31:05 -04:00
|
|
|
#define COLLECT_USAGE_REGISTER(reg, s) vm_collect_usage_register((reg), (s))

#else

#define COLLECT_USAGE_INSN(insn)           /* none */
#define COLLECT_USAGE_OPERAND(insn, n, op) /* none */
#define COLLECT_USAGE_REGISTER(reg, s)     /* none */

#endif

/**********************************************************/
/* deal with stack                                        */
/**********************************************************/

#define PUSH(x) (SET_SV(x), INC_SP(1))
#define TOPN(n) (*(GET_SP()-(n)-1))
#define POPN(n) (DEC_SP(n))
#define POP()   (DEC_SP(1))
#define STACK_ADDR_FROM_TOP(n) (GET_SP()-(n))

/**********************************************************/
/* deal with registers                                    */
/**********************************************************/

#define VM_REG_CFP (reg_cfp)
#define VM_REG_PC  (VM_REG_CFP->pc)
#define VM_REG_SP  (VM_REG_CFP->sp)
#define VM_REG_EP  (VM_REG_CFP->ep)

#define RESTORE_REGS() do { \
    VM_REG_CFP = ec->cfp; \
} while (0)

#if VM_COLLECT_USAGE_DETAILS
enum vm_regan_regtype {
    VM_REGAN_PC = 0,
    VM_REGAN_SP = 1,
    VM_REGAN_EP = 2,
    VM_REGAN_CFP = 3,
    VM_REGAN_SELF = 4,
    VM_REGAN_ISEQ = 5
};
enum vm_regan_acttype {
    VM_REGAN_ACT_GET = 0,
    VM_REGAN_ACT_SET = 1
};

#define COLLECT_USAGE_REGISTER_HELPER(a, b, v) \
(COLLECT_USAGE_REGISTER((VM_REGAN_##a), (VM_REGAN_ACT_##b)), (v))

#else
#define COLLECT_USAGE_REGISTER_HELPER(a, b, v) (v)
#endif

/* PC */
#define GET_PC()           (COLLECT_USAGE_REGISTER_HELPER(PC, GET, VM_REG_PC))
#define SET_PC(x)          (VM_REG_PC = (COLLECT_USAGE_REGISTER_HELPER(PC, SET, (x))))
#define GET_CURRENT_INSN() (*GET_PC())
#define GET_OPERAND(n)     (GET_PC()[(n)])
#define ADD_PC(n)          (SET_PC(VM_REG_PC + (n)))
#define JUMP(dst)          (SET_PC(VM_REG_PC + (dst)))

/* frame pointer, environment pointer */
#define GET_CFP()  (COLLECT_USAGE_REGISTER_HELPER(CFP, GET, VM_REG_CFP))
#define GET_EP()   (COLLECT_USAGE_REGISTER_HELPER(EP, GET, VM_REG_EP))
#define SET_EP(x)  (VM_REG_EP = (COLLECT_USAGE_REGISTER_HELPER(EP, SET, (x))))
#define GET_LEP()  (VM_EP_LEP(GET_EP()))

/* SP */
#define GET_SP()   (COLLECT_USAGE_REGISTER_HELPER(SP, GET, VM_REG_SP))
#define SET_SP(x)  (VM_REG_SP  = (COLLECT_USAGE_REGISTER_HELPER(SP, SET, (x))))
#define INC_SP(x)  (VM_REG_SP += (COLLECT_USAGE_REGISTER_HELPER(SP, SET, (x))))
#define DEC_SP(x)  (VM_REG_SP -= (COLLECT_USAGE_REGISTER_HELPER(SP, SET, (x))))
#define SET_SV(x)  (*GET_SP() = (x))
  /* set current stack value as x */

/* instruction sequence C struct */
#define GET_ISEQ() (GET_CFP()->iseq)

/**********************************************************/
/* deal with variables                                    */
/**********************************************************/

#define GET_PREV_EP(ep) ((VALUE *)((ep)[VM_ENV_DATA_INDEX_SPECVAL] & ~0x03))

/**********************************************************/
/* deal with values                                       */
/**********************************************************/

#define GET_SELF() (COLLECT_USAGE_REGISTER_HELPER(SELF, GET, GET_CFP()->self))

/**********************************************************/
/* deal with control flow 2: method/iterator              */
/**********************************************************/

/* set fastpath when cached method is *NOT* protected
 * because the inline method cache does not care about the receiver.
 */
static inline void
CC_SET_FASTPATH(const struct rb_callcache *cc, vm_call_handler func, bool enabled)
{
    if (LIKELY(enabled)) {
        vm_cc_call_set(cc, func);
    }
}
#define GET_BLOCK_HANDLER() (GET_LEP()[VM_ENV_DATA_INDEX_SPECVAL])

/**********************************************************/
/* deal with control flow 3: exception                    */
/**********************************************************/

/**********************************************************/
/* deal with stack canary                                 */
/**********************************************************/

#if VM_CHECK_MODE > 0
#define SETUP_CANARY() \
    VALUE *canary; \
    if (leaf) { \
        canary = GET_SP(); \
        SET_SV(vm_stack_canary); \
    } \
    else { \
        SET_SV(Qfalse); /* cleanup */ \
}

#define CHECK_CANARY() \
if (leaf) { \
        if (*canary == vm_stack_canary) { \
            *canary = Qfalse; /* cleanup */ \
        } \
        else { \
            vm_canary_is_found_dead(INSN_ATTR(bin), *canary); \
        } \
}

#else
#define SETUP_CANARY() /* void */
#define CHECK_CANARY() /* void */
#endif

/**********************************************************/
/* others                                                 */
/**********************************************************/

#ifndef MJIT_HEADER

#define CALL_SIMPLE_METHOD() do { \
    rb_snum_t x = leaf ? INSN_ATTR(width) : 0; \
    rb_snum_t y = attr_width_opt_send_without_block(0); \
    rb_snum_t z = x - y; \
    ADD_PC(z); \
    DISPATCH_ORIGINAL_INSN(opt_send_without_block); \
} while (0)
#endif

#define PREV_CLASS_SERIAL() (ruby_vm_class_serial)
#define NEXT_CLASS_SERIAL() (++ruby_vm_class_serial)
#define GET_GLOBAL_METHOD_STATE()   (ruby_vm_global_method_state)
#define INC_GLOBAL_METHOD_STATE()   (++ruby_vm_global_method_state)
#define GET_GLOBAL_CONSTANT_STATE() (ruby_vm_global_constant_state)
#define INC_GLOBAL_CONSTANT_STATE() (++ruby_vm_global_constant_state)

static inline struct vm_throw_data *
THROW_DATA_NEW(VALUE val, const rb_control_frame_t *cf, int st)
{
    struct vm_throw_data *obj = (struct vm_throw_data *)rb_imemo_new(imemo_throw_data, val, (VALUE)cf, 0, 0);
    obj->throw_state = st;
    return obj;
}

static inline VALUE
THROW_DATA_VAL(const struct vm_throw_data *obj)
{
    VM_ASSERT(THROW_DATA_P(obj));
    return obj->throw_obj;
}

static inline const rb_control_frame_t *
THROW_DATA_CATCH_FRAME(const struct vm_throw_data *obj)
{
    VM_ASSERT(THROW_DATA_P(obj));
    return obj->catch_frame;
}

static inline int
THROW_DATA_STATE(const struct vm_throw_data *obj)
{
    VM_ASSERT(THROW_DATA_P(obj));
    return obj->throw_state;
}

static inline int
THROW_DATA_CONSUMED_P(const struct vm_throw_data *obj)
{
    VM_ASSERT(THROW_DATA_P(obj));
    return obj->flags & THROW_DATA_CONSUMED;
}

static inline void
THROW_DATA_CATCH_FRAME_SET(struct vm_throw_data *obj, const rb_control_frame_t *cfp)
{
    VM_ASSERT(THROW_DATA_P(obj));
    obj->catch_frame = cfp;
}

static inline void
THROW_DATA_STATE_SET(struct vm_throw_data *obj, int st)
{
    VM_ASSERT(THROW_DATA_P(obj));
    obj->throw_state = st;
}

static inline void
THROW_DATA_CONSUMED_SET(struct vm_throw_data *obj)
{
    if (THROW_DATA_P(obj) &&
        THROW_DATA_STATE(obj) == TAG_BREAK) {
        obj->flags |= THROW_DATA_CONSUMED;
    }
}

#define IS_ARGS_SPLAT(ci) (vm_ci_flag(ci) & VM_CALL_ARGS_SPLAT)
#define IS_ARGS_KEYWORD(ci)        (vm_ci_flag(ci) & VM_CALL_KWARG)
#define IS_ARGS_KW_SPLAT(ci)       (vm_ci_flag(ci) & VM_CALL_KW_SPLAT)
#define IS_ARGS_KW_OR_KW_SPLAT(ci) (vm_ci_flag(ci) & (VM_CALL_KWARG | VM_CALL_KW_SPLAT))
|
Reduce allocations for keyword argument hashes
Previously, passing a keyword splat to a method always allocated
a hash on the caller side, and accepting arbitrary keywords in
a method allocated a separate hash on the callee side. Passing
explicit keywords to a method that accepted a keyword splat
did not allocate a hash on the caller side, but resulted in two
hashes allocated on the callee side.
This commit makes passing a single keyword splat to a method not
allocate a hash on the caller side. Passing multiple keyword
splats or a mix of explicit keywords and a keyword splat still
generates a hash on the caller side. On the callee side,
if arbitrary keywords are not accepted, it does not allocate a
hash. If arbitrary keywords are accepted, it will allocate a
hash, but this commit uses a callinfo flag to indicate whether
the caller already allocated a hash, and if so, the callee can
use the passed hash without duplicating it. So this commit
should make it so that a maximum of a single hash is allocated
during method calls.
To set the callinfo flag appropriately, method call argument
compilation checks if only a single keyword splat is given.
If only one keyword splat is given, the VM_CALL_KW_SPLAT_MUT
callinfo flag is not set, since in that case the keyword
splat is passed directly and not mutable. If more than one
splat is used, a new hash needs to be generated on the caller
side, and in that case the callinfo flag is set, indicating
the keyword splat is mutable by the callee.
In compile_hash, used for both hash and keyword argument
compilation, if compiling keyword arguments and only a
single keyword splat is used, pass the argument directly.
On the caller side, in vm_args.c, the callinfo flag needs to
be recognized and handled. Because the keyword splat
argument may not be a hash, it needs to be converted to a
hash first if not. Then, unless the callinfo flag is set,
the hash needs to be duplicated. The temporary copy of the
callinfo flag, kw_flag, is updated if a hash was duplicated,
to prevent the need to duplicate it again. If we are
converting to a hash or duplicating a hash, we need to update
the argument array, which can including duplicating the
positional splat array if one was passed. CALLER_SETUP_ARG
and a couple other places needs to be modified to handle
similar issues for other types of calls.
This includes fairly comprehensive tests for different ways
keywords are handled internally, checking that you get equal
results but that keyword splats on the caller side result in
distinct objects for keyword rest parameters.
Included are benchmarks for keyword argument calls.
Brief results when compiled without optimization:
def kw(a: 1) a end
def kws(**kw) kw end
h = {a: 1}
kw(a: 1) # about same
kw(**h) # 2.37x faster
kws(a: 1) # 1.30x faster
kws(**h) # 2.19x faster
kw(a: 1, **h) # 1.03x slower
kw(**h, **h) # about same
kws(a: 1, **h) # 1.16x faster
kws(**h, **h) # 1.14x faster
2020-02-24 15:05:07 -05:00
|
|
|
#define IS_ARGS_KW_SPLAT_MUT(ci) (vm_ci_flag(ci) & VM_CALL_KW_SPLAT_MUT)
/* If this returns true, an optimized function returned by `vm_call_iseq_setup_func`
   can be used as a fastpath. */
static bool
vm_call_iseq_optimizable_p(const struct rb_callinfo *ci, const struct rb_callcache *cc)
{
return !IS_ARGS_SPLAT(ci) && !IS_ARGS_KEYWORD(ci) &&
           !(METHOD_ENTRY_VISI(vm_cc_cme(cc)) == METHOD_VISI_PROTECTED);
}

#endif /* RUBY_INSNHELPER_H */