PR 117812 reports that testing GCC with zstd 1.3.4 fails because
ZSTD_CLEVEL_DEFAULT is not defined, so avoid using it.
PR libbacktrace/117812
* zstdtest.c (test_large): Use 3 rather than ZSTD_CLEVEL_DEFAULT.
Thank you for the feedback. I have made the minor changes that were requested.
Additionally, I extracted the repetitive code into a reusable helper function,
match_plus_neg_pattern, which makes the code much more readable. Other than
that, the logic, code, and tests remain the same as in version 2 of the patch.
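For reference, these are the identities being folded; a minimal illustration
(the helper functions below are purely expository, not the actual
bitops-11.c testcase):

  /* In two's complement, -a == ~(a - 1), so for any integer a:  */
  int f_and (int a) { return (a - 1) & -a; }   /* simplifies to 0  */
  int f_or  (int a) { return (a - 1) | -a; }   /* simplifies to -1 */
  int f_xor (int a) { return (a - 1) ^ -a; }   /* simplifies to -1 */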
gcc/ChangeLog:
* match.pd: New pattern.
* simplify-rtx.cc (match_plus_neg_pattern): New helper function.
(simplify_context::simplify_binary_operation_1): New
code to handle (a - 1) & -a, (a - 1) | -a and (a - 1) ^ -a.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/bitops-11.c: New test.
The BIT_FIELD_REF verifier has, among other checks:
  if (INTEGRAL_TYPE_P (TREE_TYPE (op))
      && !type_has_mode_precision_p (TREE_TYPE (op)))
    {
      error ("%qs of non-mode-precision operand", code_name);
      return true;
    }
so one can't extract something out of, say, _BitInt(63) or _BitInt(4096).
The new ifcombine optimization happily creates such BIT_FIELD_REFs
and ICEs during their verification.
The following patch fixes that by rejecting those in decode_field_reference.
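A minimal sketch of the new bail-out, matching the ChangeLog entry below
(the exact placement inside decode_field_reference is assumed):

  /* Punt on non-mode-precision integral types such as _BitInt(63);
     BIT_FIELD_REFs on those would not pass the verifier.  */
  if (INTEGRAL_TYPE_P (TREE_TYPE (inner))
      && !type_has_mode_precision_p (TREE_TYPE (inner)))
    return NULL_TREE;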
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/118023
* gimple-fold.cc (decode_field_reference): Return NULL_TREE if
inner has non-type_has_mode_precision_p integral type.
* gcc.dg/bitint-119.c: New test.
The following testcase ICEs because of a bug in matching_alloc_calls_p.
The loop was apparently meant to be walking the two attribute chains
in lock-step, but doesn't really do that. If the first lookup_attribute
returns non-NULL, the second one is not done, so rmats in that case can
be some random unrelated attribute rather than a "malloc" attribute; the
body assumes that rmats, if non-NULL, is a "malloc" attribute and relies
on its argument having the form of a "malloc" argument, so if it is some
other attribute with an incompatible argument, it just crashes.
Now, fixing that in the obvious way, i.e. instead of doing
  (amats = lookup_attribute ("malloc", amats))
  || (rmats = lookup_attribute ("malloc", rmats))
in the condition doing
  ((amats = lookup_attribute ("malloc", amats)),
   (rmats = lookup_attribute ("malloc", rmats)),
   (amats || rmats)),
fixes the testcase but regresses the Wmismatched-dealloc-{2,3}.c tests.
The problem is that walking the attribute lists in lock-step is obviously
a very bad idea: there is no requirement that the same deallocators are
present in the same order on both decls, e.g. there could be an extra malloc
attribute without argument in just one of the lists, or the order of say
free/realloc could be swapped, etc. We don't generally document nor enforce
any particular ordering of attributes (even when for some attributes we just
handle the first one rather than all).
So, this patch instead simply splits it into two loops, the first one walks
alloc_decl attributes, the second one walks dealloc_decl attributes.
If the malloc attribute argument is a built-in, that doesn't change
anything, and otherwise we have the chance to populate the whole
common_deallocs hash_set in the first loop and then can check it in the
second one (and don't need to use the more expensive add method on it, we
can just use contains there). Not to mention that it also fixes the case when
the function would incorrectly return true if there wasn't a common
deallocator between the two, but dealloc_decl had 2 malloc attributes with
the same deallocator.
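A rough sketch of the resulting structure (simplified and with assumed local
names; the real gimple-ssa-warn-access.cc code also special-cases built-in
deallocators):

  /* First loop: collect the deallocators named by alloc_decl's
     "malloc" attributes.  */
  hash_set<tree> common_deallocs;
  for (tree ats = DECL_ATTRIBUTES (alloc_decl);
       (ats = lookup_attribute ("malloc", ats)); ats = TREE_CHAIN (ats))
    if (tree args = TREE_VALUE (ats))
      common_deallocs.add (TREE_VALUE (args));
  /* Second loop: check dealloc_decl's "malloc" attributes against them.  */
  for (tree rts = DECL_ATTRIBUTES (dealloc_decl);
       (rts = lookup_attribute ("malloc", rts)); rts = TREE_CHAIN (rts))
    if (tree args = TREE_VALUE (rts))
      if (common_deallocs.contains (TREE_VALUE (args)))
        return true;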
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR middle-end/118024
* gimple-ssa-warn-access.cc (matching_alloc_calls_p): Walk malloc
attributes of alloc_decl and dealloc_decl in separate loops rather
than in lock-step. Use common_deallocs.contains rather than
common_deallocs.add in the second loop.
* gcc.dg/pr118024.c: New test.
The magic default values (usually -1 or sometimes 2) for some options date
from the times before we had global_options_set; I think we should eventually
get rid of all of those.
The PR is about gcc -Q --help=optimizers reporting -fshort-enums as
[enabled] when it is disabled.
For this the following patch is just a partial fix; with explicit
gcc -Q --help=optimizers -fshort-enums
or
gcc -Q --help=optimizers -fno-short-enums
it already worked correctly before; with this patch it will report the
correct value even with just
gcc -Q --help=optimizers
on most targets, except 32-bit arm with some options or defaults, so I
think it is a step in the right direction.
But, as I wrote in the PR, process_options isn't run before --help=, and in
its current form it even shouldn't be, because it warns on some option
combinations and errors or emits sorry on others. So I think process_options
should ideally take a bool argument saying whether it is being run for
--help= purposes; if so, it should not emit warnings and should just adjust
the options, otherwise do what it currently does.
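A hedged sketch of the kind of change this means for process_options (the
surrounding code is assumed, shown only to illustrate the OPTION_SET_P test):

  /* Only pick the target default when the user didn't pass
     -fshort-enums or -fno-short-enums explicitly.  */
  if (!OPTION_SET_P (flag_short_enums))
    flag_short_enums = targetm.default_short_enums ();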
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR c/118011
gcc/
* opts.cc (init_options_struct): Don't set opts->x_flag_short_enums to
2.
* toplev.cc (process_options): Test !OPTION_SET_P (flag_short_enums)
rather than flag_short_enums == 2.
gcc/ada/
* gcc-interface/misc.cc (gnat_post_options): Test
!OPTION_SET_P (flag_short_enums) rather than flag_short_enums == 2.
Decomposition of lambda closure types is not allowed by
[dcl.struct.bind] p6, since members of a closure have no name.
r244909 made this an error, but missed the case where a lambda is used
as a base. This patch moves the check to find_decomp_class_base to
handle this case.
As a drive-by improvement, we also slightly improve the diagnostics to
indicate why a base class was being inspected. Ideally the diagnostic
would point directly at the relevant base, but there doesn't seem to be
an easy way to get this location just from the binfo, so I don't worry
about that here.
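A hedged illustration of the newly handled case (not necessarily the new
decomp62.C testcase):

  auto l = [i = 0] { return i; };
  struct S : decltype(l) { };	// lambda closure type used as a base
  // auto [x] = S{l};		// now rejected: decomposing a lambda closure type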
PR c++/90321
gcc/cp/ChangeLog:
* decl.cc (find_decomp_class_base): Check for decomposing a
lambda closure type. Report base class chains if needed.
(cp_finish_decomp): Remove no-longer-needed check.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1z/decomp62.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Marek Polacek <polacek@redhat.com>
The following testcase is miscompiled on s390x-linux with -O2 -march=z15.
The problem happens during cse2, which sees in an extended basic block
(jump_insn 217 78 216 10 (parallel [
            (set (pc)
                (if_then_else (ne (reg:SI 165)
                        (const_int 1 [0x1]))
                    (label_ref 216)
                    (pc)))
            (set (reg:SI 165)
                (plus:SI (reg:SI 165)
                    (const_int -1 [0xffffffffffffffff])))
            (clobber (scratch:SI))
            (clobber (reg:CC 33 %cc))
        ]) "t.c":14:17 discrim 1 2192 {doloop_si64}
     (int_list:REG_BR_PROB 955630228 (nil))
 -> 216)
...
(insn 99 98 100 12 (set (reg:SI 138)
        (const_int 1 [0x1])) "t.c":9:31 1507 {*movsi_zarch}
     (nil))
(insn 100 99 103 12 (parallel [
            (set (reg:SI 137)
                (minus:SI (reg:SI 138)
                    (subreg:SI (reg:HI 135 [ a ]) 0)))
            (clobber (reg:CC 33 %cc))
        ]) "t.c":9:31 1904 {*subsi3}
     (expr_list:REG_DEAD (reg:SI 138)
        (expr_list:REG_DEAD (reg:HI 135 [ a ])
            (expr_list:REG_UNUSED (reg:CC 33 %cc)
                (nil)))))
Note, cse2 has df_note_add_problem () before df_analyze, which adds
  (expr_list:REG_UNUSED (reg:SI 165)
     (expr_list:REG_UNUSED (reg:CC 33 %cc)
notes to the first insn (correctly so, %cc is clobbered there and pseudo
165 isn't used after the insn).
Now, cse_extended_basic_block has an extra optimization on conditional
jumps, where it records equivalence on the edge which continues in the ebb.
Here it sees that (ne (reg:SI 165) (const_int 1)) is false on the edge and
remembers that pseudo 165 is comparison equivalent to (const_int 1),
so on insn 100 it decides to replace (reg:SI 138) with (reg:SI 165).
This optimization isn't correct here though, because the JUMP_INSN has
multiple sets. Before r0-77890 record_jump_equiv was called from
cse_insn guarded on n_sets == 1 && any_condjump_p (insn), so it wouldn't
be done on the above JUMP_INSN where n_sets == 2. But since that change
it is guarded with single_set (insn) && any_condjump_p (insn) and that
is true because of the REG_UNUSED note. Looking at that note is
inappropriate in CSE though, because the whole intent of the pass is to
extend the lifetimes of the pseudos if equivalence is found, so the fact
that there is a REG_UNUSED note for (reg:SI 165) and that the reg isn't used
later doesn't imply that it won't be used after the optimization.
So, unless we manage to process the other sets on the JUMP_INSN (it wouldn't
be terribly hard in this exact case, the doloop insn decreases the register
by 1 and so we could just record equivalence to (const_int 0) instead, but
generally it might be hard), we should IMHO just punt if there are multiple
sets.
The patch below adds a !multiple_sets (insn) check rather than replacing the
single_set (insn) check with it, because apparently any_condjump_p uses
pc_set, which supports these cases: PATTERN is a SET to PC (that is
single_set (insn) and !multiple_sets (insn)); PATTERN is a PARALLEL with a
single SET to PC and some CLOBBERs (likewise); PATTERN is a PARALLEL with two
or more SETs where the first one is a SET to PC (that can be single_set (insn)
thanks to REG_UNUSED notes but is not !multiple_sets (insn)); or PATTERN is an
UNSPEC/UNSPEC_VOLATILE with a SET inside of it. For the last case
!multiple_sets (insn) will be true, but IMHO we shouldn't try to derive
anything from those because we haven't checked the rest of the UNSPEC*
and we don't really know what it does.
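In other words, the guard becomes roughly the following (a sketch with
assumed surrounding context in cse_extended_basic_block; taken names
whichever branch direction continues the ebb):

  /* Only record the equivalence from a conditional jump when the jump
     really is the only SET on the insn.  */
  if (single_set (insn) && !multiple_sets (insn) && any_condjump_p (insn))
    record_jump_equiv (insn, taken);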
2024-12-13 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/117095
* cse.cc (cse_extended_basic_block): Don't call record_jump_equiv
if multiple_sets (insn).
* gcc.c-torture/execute/pr117095.c: New test.
Use a local reference for the (now possibly lifetime-extended) result of
*__first so that we copy it only when necessary.
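A hedged sketch of the idiom (not the actual libstdc++ code, and without
projections): bind a reference first and copy the element only when it
actually becomes the new result.

  #include <functional>
  #include <utility>

  template<typename It, typename Comp = std::less<>>
  auto
  min_sketch (It first, It last, Comp comp = {})
  {
    auto result = *first;		// one unavoidable copy
    while (++first != last)
      {
	auto&& tmp = *first;		// no copy; lifetime-extended if a prvalue
	if (std::invoke (comp, tmp, result))
	  result = std::forward<decltype (tmp)> (tmp);	// copy/move only when needed
      }
    return result;
  }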
PR libstdc++/112349
libstdc++-v3/ChangeLog:
* include/bits/ranges_algo.h (__min_fn::operator()): Turn local
object __tmp into a reference.
* include/bits/ranges_util.h (__max_fn::operator()): Likewise.
* testsuite/25_algorithms/max/constrained.cc (test04): New test.
* testsuite/25_algorithms/min/constrained.cc (test04): New test.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
In this PR, we have to handle a case where MVE predicates are supplied
as a const_int, where individual predicates have illegal boolean
values (such as 0xc for a 4-bit boolean predicate). To avoid the ICE,
fix the constant (any non-zero value is converted to all 1s) and emit
a warning.
On MVE, V8BI and V4BI multi-bit masks are interpreted byte-by-byte at
instruction level, but end-users should describe lanes rather than
bytes (so all bytes of a true-predicated lane should be '1'), see the
section on MVE intrinsics in the Arm ACLE specification.
Since force_lowpart_subreg cannot handle const_ints (because they have VOID
mode), use gen_lowpart on them, and force_lowpart_subreg otherwise.
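A hedged illustration of the lane-wise fixup described above (expository
only, not the arm-mve-builtins.cc code): for a V4BI predicate each lane owns
4 bits of the 16-bit constant, and any lane with a non-zero value becomes
all ones.

  unsigned short
  fix_v4bi_predicate (unsigned short p)
  {
    unsigned short fixed = 0;
    for (int lane = 0; lane < 4; lane++)
      if ((p >> (4 * lane)) & 0xf)
	fixed |= (unsigned short) (0xf << (4 * lane));	/* e.g. 0xc -> 0xf */
    return fixed;
  }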
2024-11-20 Christophe Lyon <christophe.lyon@linaro.org>
Jakub Jelinek <jakub@redhat.com>
PR target/114801
gcc/
* config/arm/arm-mve-builtins.cc
(function_expander::add_input_operand): Handle CONST_INT
predicates.
gcc/testsuite/
* gcc.target/arm/mve/pr108443.c: Update predicate constant.
* gcc.target/arm/mve/pr108443-run.c: Likewise.
* gcc.target/arm/mve/pr114801.c: New test.
Implement vst2q, vst4q, vld2q and vld4q using the new MVE builtins
framework.
Since MVE uses different tuple modes than Neon, we need to use
VALID_MVE_STRUCT_MODE because VALID_NEON_STRUCT_MODE is no longer a
super-set of it, for instance in output_move_neon and
arm_print_operand_address.
In arm_hard_regno_mode_ok, the change is similar but a bit more
intrusive.
Expand the VSTRUCT iterator, so that mov<mode> and neon_mov<mode>
patterns from neon.md still work for MVE.
Besides the small updates to the patterns in mve.md, we have to update
vec_load_lanes and vec_store_lanes in vec-common.md so that the
vectorizer can handle the new modes. These patterns are now different
from Neon's, so maybe we should move them back to neon.md and mve.md.
The patch adds arm_array_mode, which is used by build_array_type_nelts
and makes it possible to support the new assert in
register_builtin_tuple_types.
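For reference, a minimal use of two of the intrinsics now implemented
through the framework (standard ACLE MVE intrinsics; requires an MVE-enabled
target, e.g. -march=armv8.1-m.main+mve):

  #include <arm_mve.h>

  void
  copy_pairs (const int32_t *in, int32_t *out)
  {
    int32x4x2_t v = vld2q_s32 (in);	/* de-interleaving load of 2x4 int32 lanes */
    vst2q_s32 (out, v);			/* interleaving store */
  }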
gcc/ChangeLog:
* config/arm/arm-mve-builtins-base.cc (class vst24_impl): New.
(class vld24_impl): New.
(vld2q, vld4q, vst2q, vst4q): New.
* config/arm/arm-mve-builtins-base.def (vld2q, vld4q, vst2q)
(vst4q): New.
* config/arm/arm-mve-builtins-base.h (vld2q, vld4q, vst2q, vst4q):
New.
* config/arm/arm-mve-builtins.cc (register_builtin_tuple_types):
Add more asserts.
* config/arm/arm.cc (TARGET_ARRAY_MODE): New.
(output_move_neon): Handle MVE struct modes.
(arm_print_operand_address): Likewise.
(arm_hard_regno_mode_ok): Likewise.
(arm_array_mode): New.
* config/arm/arm.h (VALID_MVE_STRUCT_MODE): Likewise.
* config/arm/arm_mve.h (vst4q): Delete.
(vst2q): Delete.
(vld2q): Delete.
(vld4q): Delete.
(vst4q_s8): Delete.
(vst4q_s16): Delete.
(vst4q_s32): Delete.
(vst4q_u8): Delete.
(vst4q_u16): Delete.
(vst4q_u32): Delete.
(vst4q_f16): Delete.
(vst4q_f32): Delete.
(vst2q_s8): Delete.
(vst2q_u8): Delete.
(vld2q_s8): Delete.
(vld2q_u8): Delete.
(vld4q_s8): Delete.
(vld4q_u8): Delete.
(vst2q_s16): Delete.
(vst2q_u16): Delete.
(vld2q_s16): Delete.
(vld2q_u16): Delete.
(vld4q_s16): Delete.
(vld4q_u16): Delete.
(vst2q_s32): Delete.
(vst2q_u32): Delete.
(vld2q_s32): Delete.
(vld2q_u32): Delete.
(vld4q_s32): Delete.
(vld4q_u32): Delete.
(vld4q_f16): Delete.
(vld2q_f16): Delete.
(vst2q_f16): Delete.
(vld4q_f32): Delete.
(vld2q_f32): Delete.
(vst2q_f32): Delete.
(__arm_vst4q_s8): Delete.
(__arm_vst4q_s16): Delete.
(__arm_vst4q_s32): Delete.
(__arm_vst4q_u8): Delete.
(__arm_vst4q_u16): Delete.
(__arm_vst4q_u32): Delete.
(__arm_vst2q_s8): Delete.
(__arm_vst2q_u8): Delete.
(__arm_vld2q_s8): Delete.
(__arm_vld2q_u8): Delete.
(__arm_vld4q_s8): Delete.
(__arm_vld4q_u8): Delete.
(__arm_vst2q_s16): Delete.
(__arm_vst2q_u16): Delete.
(__arm_vld2q_s16): Delete.
(__arm_vld2q_u16): Delete.
(__arm_vld4q_s16): Delete.
(__arm_vld4q_u16): Delete.
(__arm_vst2q_s32): Delete.
(__arm_vst2q_u32): Delete.
(__arm_vld2q_s32): Delete.
(__arm_vld2q_u32): Delete.
(__arm_vld4q_s32): Delete.
(__arm_vld4q_u32): Delete.
(__arm_vst4q_f16): Delete.
(__arm_vst4q_f32): Delete.
(__arm_vld4q_f16): Delete.
(__arm_vld2q_f16): Delete.
(__arm_vst2q_f16): Delete.
(__arm_vld4q_f32): Delete.
(__arm_vld2q_f32): Delete.
(__arm_vst2q_f32): Delete.
(__arm_vst4q): Delete.
(__arm_vst2q): Delete.
(__arm_vld2q): Delete.
(__arm_vld4q): Delete.
* config/arm/arm_mve_builtins.def (vst4q, vst2q, vld4q, vld2q):
Delete.
* config/arm/iterators.md (VSTRUCT): Add V2x16QI, V2x8HI, V2x4SI,
V2x8HF, V2x4SF, V4x16QI, V4x8HI, V4x4SI, V4x8HF, V4x4SF.
(MVE_VLD2_VST2, MVE_vld2_vst2, MVE_VLD4_VST4, MVE_vld4_vst4): New.
* config/arm/mve.md (mve_vst4q<mode>): Update into ...
(@mve_vst4q<mode>): ... this.
(mve_vst2q<mode>): Update into ...
(@mve_vst2q<mode>): ... this.
(mve_vld2q<mode>): Update into ...
(@mve_vld2q<mode>): ... this.
(mve_vld4q<mode>): Update into ...
(@mve_vld4q<mode>): ... this.
* config/arm/vec-common.md (vec_load_lanesoi<mode>): Remove MVE
support.
(vec_load_lanesxi<mode>): Likewise.
(vec_store_lanesoi<mode>): Likewise.
(vec_store_lanesxi<mode>): Likewise.
(vec_load_lanes<MVE_vld2_vst2><mode>): New.
(vec_store_lanes<MVE_vld2_vst2><mode>): New.
(vec_load_lanes<MVE_vld4_vst4><mode>): New.
(vec_store_lanes<MVE_vld4_vst4><mode>): New.
Now that tuples are properly supported, we can update the store shape to
expect "t0" instead of "v0" as its last argument.
gcc/ChangeLog:
* config/arm/arm-mve-builtins-shapes.cc (struct store_def): Add
support for tuples.
This patch is largely a copy/paste from the aarch64 SVE counterpart,
and adds support for tuples to the MVE intrinsics framework.
Introduce function_resolver::infer_tuple_type which will be used to
resolve overloaded vst2q and vst4q function names in a later patch.
Fix access to acle_vector_types in a few places, as well as in
infer_vector_or_tuple_type because we should shift the tuple size to
the right by one bit when computing the array index.
The new wrap_type_in_struct, register_type_decl and infer_tuple_type
are largely copies of the aarch64 versions, and
register_builtin_tuple_types is very similar.
gcc/ChangeLog:
* config/arm/arm-mve-builtins-shapes.cc (parse_type): Fix access
to acle_vector_types.
* config/arm/arm-mve-builtins.cc (wrap_type_in_struct): New.
(register_type_decl): New.
(register_builtin_tuple_types): Fix support for tuples.
(function_resolver::infer_tuple_type): New.
* config/arm/arm-mve-builtins.h
(function_resolver::infer_tuple_type): Declare.
(function_instance::tuple_type): Fix access to acle_vector_types.
Remove floating-point condition from mve_vec_extract_sext_internal and
mve_vec_extract_zext_internal, since the MVE_2 iterator does not
include any FP mode.
gcc/ChangeLog:
* config/arm/mve.md (mve_vec_extract_sext_internal): Fix
condition.
(mve_vec_extract_zext_internal): Likewise.
vstrq_impl derives from store_truncating and vldrq_impl derives from
load_extending which both implement call_properties.
No need to re-implement them in the derived classes.
gcc/ChangeLog:
* config/arm/arm-mve-builtins-base.cc (vstrq_impl): Remove
call_properties.
(vldrq_impl): Likewise.
This patch adds the load_gather_base shape description.
Unlike other load_gather shapes, this one does not support overloaded
forms.
gcc/ChangeLog:
* config/arm/arm-mve-builtins-shapes.cc (struct
load_gather_base_def): New.
* config/arm/arm-mve-builtins-shapes.h (load_gather_base): New.
This patch adds support to check that an immediate is a multiple of a
given value in a given range.
This will be used for instance by scatter_base to check that offset is
in +/-4*[0..127].
Unlike require_immediate_range, require_immediate_range_multiple
accepts signed range bounds to handle the above case.
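A hedged sketch of the scatter_base offset check (the helper name is made up
for illustration):

  /* Offset must be a multiple of 4 in +/-4*[0..127], i.e. [-508, 508].  */
  bool
  scatter_base_offset_ok (long long imm)
  {
    return imm % 4 == 0 && imm >= -4 * 127 && imm <= 4 * 127;
  }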
gcc/ChangeLog:
* config/arm/arm-mve-builtins.cc (report_out_of_range_multiple):
New.
(function_checker::require_signed_immediate): New.
(function_checker::require_immediate_range_multiple): New.
* config/arm/arm-mve-builtins.h
(function_checker::require_immediate_range_multiple): New.
(function_checker::require_signed_immediate): New.
This patch adds the store_scatter_offset shape and uses a new helper
class (store_scatter), which will also be used by later patches.
gcc/ChangeLog:
* config/arm/arm-mve-builtins-shapes.cc (struct store_scatter): New.
(struct store_scatter_offset_def): New.
* config/arm/arm-mve-builtins-shapes.h (store_scatter_offset): New.
This new helper returns true if the mode suffix goes after the
predicate suffix. This is true in most cases, so the base
implementations in nonoverloaded_base and overloaded_base return true.
For instance: vaddq_m_n_s32.
This will be useful in later patches to implement
vstr?q_scatter_offset_p (_p appears after _offset).
gcc/ChangeLog:
* config/arm/arm-mve-builtins-shapes.cc (struct
nonoverloaded_base): Implement mode_after_pred.
(struct overloaded_base): Likewise.
* config/arm/arm-mve-builtins.cc (function_builder::get_name):
Call mode_after_pred as needed.
* config/arm/arm-mve-builtins.h (function_shape): Add
mode_after_pred.
Hi,
this patch makes genrecog split its output into separate files (10 by
default) in the same vein as genemit does. The changes are mostly
mechanical again, changing printfs and puts to fprintf.
As insn-recog.cc relies on being able to call other recog functions, a
header insn-recog.h is introduced that pre-declares all of those.
For simplicity the number of files is determined by (re-using)
--with-insnemit-partitions. Naming suggestions welcome :)
Bootstrapped and regtested on x86 and power10, regtested on riscv.
aarch64 bootstrap is currently blocked because of the
"maybe uninitialized" issue discussed on IRC.
Regards
Robin
PR target/111600
gcc/ChangeLog:
* Makefile.in: Add insn-recog split.
* configure: Regenerate.
* configure.ac: Document that the number of insnemit partitions is
used for insn-recog as well.
* genconditions.cc (write_one_condition): Use fprintf.
* genpreds.cc (write_predicate_expr): Ditto.
(write_init_reg_class_start_regs): Ditto.
* genrecog.cc (write_header): Add header file to includes.
(printf_indent): Use fprintf.
(change_state): Ditto.
(print_code): Ditto.
(print_host_wide_int): Ditto.
(print_parameter_value): Ditto.
(print_test_rtx): Ditto.
(print_nonbool_test): Ditto.
(print_label_value): Ditto.
(print_test): Ditto.
(print_decision): Ditto.
(print_state): Ditto.
(print_subroutine_call): Ditto.
(print_acceptance): Ditto.
(print_subroutine_start): Ditto.
(print_pattern): Ditto.
(print_subroutine): Ditto.
(print_subroutine_group): Ditto.
(handle_arg): Add -O and -H for output and header file handling.
(main): Use callback.
* gentarget-def.cc (def_target_insn): Use fprintf.
* read-md.cc (md_reader::print_c_condition): Ditto.
* read-md.h (class md_reader): Ditto.
I noticed a -Wmaybe-uninitialized warning for this function, which turns
out to be correct. If the caller passes a valid std::ios_base::seekdir
value then there's no problem, but if they pass std::ios_base::seekdir(999)
then we don't initialize the __base variable before adding it to __off.
Rather than initialize it to an arbitrary value, we should return an
error.
Also add [[unlikely]] attributes to the paths that return an error.
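A hedged sketch of the resulting control flow (illustrative only, not the
actual basic_spanbuf::seekoff implementation):

  #include <ios>

  std::streamoff
  seekoff_sketch (std::streamoff off, std::ios_base::seekdir way,
		  std::streamoff cur, std::streamoff size)
  {
    std::streamoff base;
    if (way == std::ios_base::beg)
      base = 0;
    else if (way == std::ios_base::cur)
      base = cur;
    else if (way == std::ios_base::end)
      base = size;
    else [[unlikely]]
      return -1;	// invalid seekdir: fail instead of using an uninitialized base
    return base + off;
  }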
libstdc++-v3/ChangeLog:
* include/std/spanstream (basic_spanbuf::seekoff): Return an
error for invalid seekdir values.
Although this should never make a difference for sensible code, we
should really make the expression in the noexcept-specifier match the
expression in the function body.
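An illustration of the principle (not the actual ranges_cmp.h code):

  #include <utility>

  template<typename T, typename U>
  constexpr bool
  neq_sketch (T&& t, U&& u)
    noexcept(noexcept(!(std::forward<T>(t) == std::forward<U>(u))))	// same expression as the body
  {
    return !(std::forward<T>(t) == std::forward<U>(u));
  }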
libstdc++-v3/ChangeLog:
* include/bits/ranges_cmp.h (not_equal_to): Make order of
expressions in noexcept-specifier match the body.
* testsuite/20_util/function_objects/range.cmp/not_equal_to.cc:
Check noexcept.
This sets the L1 data cache size for some cores based on their size in their
Technical Reference Manuals.
Today the port minimum is 256 bytes as explained in commit
g:9a99559a478111f7fbeec29bd78344df7651c707, however like Neoverse V2 most cores
actually define the L1 cache size as 64 bytes. The generic Armv9-A model was
already changed in g:f000cb8cbc58b23a91c84d47d69481904981a1d9 and this
change follows suit for a few other cores based on their TRMs.
This results in less memory pressure when running on large core count machines.
gcc/ChangeLog:
* config/aarch64/tuning_models/cortexx925.h: Set L1 cache size to 64b.
* config/aarch64/tuning_models/neoverse512tvb.h: Likewise.
* config/aarch64/tuning_models/neoversen1.h: Likewise.
* config/aarch64/tuning_models/neoversen2.h: Likewise.
* config/aarch64/tuning_models/neoversen3.h: Likewise.
* config/aarch64/tuning_models/neoversev1.h: Likewise.
* config/aarch64/tuning_models/neoversev2.h: Likewise.
(neoversev2_prefetch_tune): Removed.
* config/aarch64/tuning_models/neoversev3.h: Likewise.
* config/aarch64/tuning_models/neoversev3ae.h: Likewise.
GCC 15 added two new fusions, CMP+CSEL and CMP+CSET.
This patch enables them for cores that support them, based on their Software
Optimization Guides, and generically on Armv9-A. Even if a core does not
support them there's no negative performance impact.
gcc/ChangeLog:
* config/aarch64/aarch64-fusion-pairs.def (AARCH64_FUSE_NEOVERSE_BASE):
New.
* config/aarch64/tuning_models/neoverse512tvb.h: Use it.
* config/aarch64/tuning_models/neoversen2.h: Use it.
* config/aarch64/tuning_models/neoversen3.h: Use it.
* config/aarch64/tuning_models/neoversev1.h: Use it.
* config/aarch64/tuning_models/neoversev2.h: Use it.
* config/aarch64/tuning_models/neoversev3.h: Use it.
* config/aarch64/tuning_models/neoversev3ae.h: Use it.
* config/aarch64/tuning_models/cortexx925.h: Add fusions.
* config/aarch64/tuning_models/generic_armv9_a.h: Add fusions.
As mentioned in the PR, the addition of the vec_addsubv2sf3 expander caused
the testcase to be vectorized and to no longer use fma.
The following patch adds new expanders so that it can be vectorized
again with the alternating add/sub fma instructions.
There is some bug on the SLP cost computation side which causes it
not to count some scalar multiplication costs, but I think the patch
is desirable anyway before that is fixed, and the testcase for now just
uses -fvect-cost-model=unlimited.
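A hedged sketch of the kind of code involved (not necessarily the new
pr116979.c testcase): alternating multiply-subtract/multiply-add on a
two-element float vector, which the new expanders let the vectorizer map to
the fmaddsub/fmsubadd forms.

  void
  f (float *r, const float *a, const float *b, const float *c)
  {
    r[0] = a[0] * b[0] - c[0];
    r[1] = a[1] * b[1] + c[1];
  }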
2024-12-13 Jakub Jelinek <jakub@redhat.com>
PR target/116979
* config/i386/mmx.md (vec_fmaddsubv2sf4, vec_fmsubaddv2sf4): New
define_expand patterns.
* gcc.target/i386/pr116979.c: New test.
This patch adds a second variant to implement the extract/slide1up
pattern. In order to do a permutation like
<3, 4, 5, 6> from vectors <0, 1, 2, 3> and <4, 5, 6, 7>
we currently extract <3> from the first vector and re-insert it into the
second vector. Unless register-file crossing latency is essentially
zero it should be preferable to first slide the second vector up by
one, then slide down the first vector by (nunits - 1).
gcc/ChangeLog:
* config/riscv/riscv-protos.h (riscv_register_move_cost):
Export.
* config/riscv/riscv-v.cc (shuffle_extract_and_slide1up_patterns):
Rename...
(shuffle_off_by_one_patterns): ... to this and add slideup/slidedown
variant.
(expand_vec_perm_const_1): Call renamed function.
* config/riscv/riscv.cc (riscv_secondary_memory_needed): Remove
static.
(riscv_register_move_cost): Add VR<->GR/FR handling.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr112599-2.c: Adjust test
expectation.
This adds handling for even/odd patterns.
gcc/ChangeLog:
* config/riscv/riscv-v.cc (shuffle_even_odd_patterns): New
function.
(expand_vec_perm_const_1): Use new function.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vls-vlmax/shuffle-evenodd-run.c: New test.
* gcc.target/riscv/rvv/autovec/vls-vlmax/shuffle-evenodd.c: New test.
This patch adds efficient handling of interleaving patterns like
[0 4 1 5] to vec_perm_const. It is implemented by a slideup and a
gather.
gcc/ChangeLog:
* config/riscv/riscv-v.cc (shuffle_interleave_patterns): New
function.
(expand_vec_perm_const_1): Use new function.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vls-vlmax/shuffle-interleave-run.c: New test.
* gcc.target/riscv/rvv/autovec/vls-vlmax/shuffle-interleave.c: New test.
This patch adds a shuffle_slide_patterns to expand_vec_perm_const.
It recognizes permutations like
{0, 1, 4, 5}
or
{2, 3, 6, 7}
which can be constructed by a slideup or slidedown of one of the vectors
into the other one.
gcc/ChangeLog:
* config/riscv/riscv-v.cc (shuffle_slide_patterns): New.
(expand_vec_perm_const_1): Call new function.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vls-vlmax/shuffle-slide-run.c: New test.
* gcc.target/riscv/rvv/autovec/vls-vlmax/shuffle-slide.c: New test.
In PR117353 and PR117878 we expand a const vector during reload. For
this we use an unpredicated left shift. Normally an insn like this is
split but as we introduce it late and cannot create pseudos anymore
it remains unpredicated and is not recognized by the vsetvl pass (where
we expect all insns to be in predicated RVV format).
This patch directly emits a predicated shift instead. We could
distinguish between !lra_in_progress and lra_in_progress and emit
an unpredicated shift in the former case but we're not very likely
to optimize it anyway so it doesn't seem worth it.
PR target/117353
PR target/117878
gcc/ChangeLog:
* config/riscv/riscv-v.cc (expand_const_vector): Use predicated
instead of simple shift.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr117353.c: New test.
Some directives in the test were #errror rather than #error.
2024-12-13 Jakub Jelinek <jakub@redhat.com>
* c-c++-common/cpp/embed-1.c: Use #error rather than #errror.