在 Linux / Bash 中随机打乱行
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/17578873/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
Randomly shuffling lines in Linux / Bash
提问by Code Geas Coder
I have some files in Linux, for example 2, and I need to shuffle their lines together into one file.
我在 Linux 中有一些文件(例如 2 个),需要把它们的行打乱后合并到一个文件中。
For example
例如
$cat file1
line 1
line 2
line 3
line 4
line 5
line 6
line 7
line 8
and
和
$cat file2
linea one
linea two
linea three
linea four
linea five
linea six
linea seven
linea eight
And after shuffling the two files I can obtain something like:
将这两个文件打乱之后,我可以得到类似这样的结果:
linea eight
line 4
linea five
line 1
linea three
line 8
linea seven
line 5
linea two
linea one
line 2
linea four
line 7
linea six
line 1
line 6
回答by Gilles Quenot
You should use the shuf command =)
你应该使用 shuf 命令 =)
cat file1 file2 | shuf
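Two shuf options that go well with this (GNU coreutils; these lines are my addition, not part of the answer):
cat file1 file2 | shuf -o shuffled.txt   # -o writes the shuffled result to a file
shuf -n 3 file1                          # -n picks 3 random lines from a single file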
Or with Perl :
或使用 Perl :
cat file1 file2 | perl -MList::Util=shuffle -wne 'print shuffle <>;'
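The same Perl approach can also be given the files directly as arguments instead of reading from cat (a small variant added here for illustration):
perl -MList::Util=shuffle -e 'print shuffle(<>)' file1 file2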
回答by Kent
I would use shuf too.
我也会用 shuf。
another option, gnu sort has:
另一种选择,gnu sort 有:
-R, --random-sort
sort by random hash of keys
you could try:
你可以试试:
cat file1 file2|sort -R
回答by jm666
Sort: (similar lines will be put together)
排序:(类似的行会放在一起)
cat file1 file2 | sort -R
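To see the "similar lines are put together" effect mentioned above: sort -R sorts by a random hash of each line, so identical lines always come out adjacent, while shuf permutes the lines uniformly (illustrative example, my addition):
printf 'a\nb\na\nb\na\n' | sort -R   # the three a's (and the two b's) come out grouped
printf 'a\nb\na\nb\na\n' | shuf      # shuf interleaves them randomly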
Shuf:
使用 shuf:
cat file1 file2 | shuf
Perl:
使用 Perl:
cat file1 file2 | perl -MList::Util=shuffle -e 'print shuffle<STDIN>'
BASH:
使用 Bash:
cat file1 file2 | while IFS= read -r line
do
printf "%06d %s\n" $RANDOM "$line"
done | sort -n | cut -c8-
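The loop above is a decorate-sort-undecorate pattern: each line is prefixed with a zero-padded random key (six digits plus a space, which is why cut -c8- strips the first seven characters), sorted on that key, and then the key is removed. A variant of the same idea that strips the key by field instead of by character position (my sketch, not from the answer):
cat file1 file2 | while IFS= read -r line; do
    printf '%s\t%s\n' "$RANDOM" "$line"   # decorate: random key, TAB, original line
done | sort -n | cut -f2-                 # sort on the key, then drop it (cut defaults to TAB)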
Awk:
使用 Awk:
cat file1 file2 | awk 'BEGIN{srand()}{printf "%06d %s\n", rand()*1000000, $0;}' | sort -n | cut -c8-
回答by Messa
Just a note to OS X users who use MacPorts: the shuf command is part of coreutils and is installed under the name gshuf.
仅提醒使用 MacPorts 的 OS X 用户:shuf 命令属于 coreutils,安装后的名称是 gshuf。
$ sudo port install coreutils
$ gshuf example.txt # or cat example.txt | gshuf
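For Homebrew users the situation should be similar: the coreutils formula also installs the g-prefixed tools (my note, not part of the original answer):
brew install coreutils
gshuf example.txt   # or: cat example.txt | gshuf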
回答by mmore500
This worked for me. It employs the Fisher-Yates shuffle.
这对我有用。它采用了 Fisher-Yates(费希尔-耶茨)洗牌算法。
randomize()
{
    arguments=("$@")
    declare -a out
    i="$#"
    j="0"
    while [[ $i -ge "0" ]] ; do
        which=$(random_range "0" "$i")
        out[j]=${arguments[$which]}
        arguments[$which]=${arguments[i]}   # move the last unpicked element into the freed slot
        (( i-- ))
        (( j++ ))
    done
    echo ${out[*]}
}
random_range()
{
    # note: the positional parameters were dropped in the source; restored here as $1 (low) and $2 (high)
    low=$1
    range=$(($2 - $1))
    if [[ range -ne 0 ]]; then
        echo $(($low+$RANDOM % $range))
    else
        echo "$1"
    fi
}
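For line-oriented input, here is a minimal Fisher-Yates sketch in pure bash added for illustration (assumes bash 4+ for mapfile; $RANDOM only reaches 32767, so it is not meant for very large inputs):
mapfile -t lines < <(cat file1 file2)        # read all lines into an array
for ((i=${#lines[@]}-1; i>0; i--)); do
    j=$((RANDOM % (i+1)))                    # random index in 0..i (slight modulo bias)
    tmp=${lines[i]}; lines[i]=${lines[j]}; lines[j]=$tmp
done
printf '%s\n' "${lines[@]}"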
回答by Tyler
Here's a one-liner that doesn't rely on shuf or sort -R, which I didn't have on my mac:
这是一个不依赖 shuf 或 sort -R 的单行命令(我的 Mac 上没有这两个命令):
while read line; do echo $RANDOM $line; done < my_file | sort -n | cut -f2- -d' '
This iterates over all the lines in my_file and reprints them in a randomized order.
这会遍历 my_file 中的所有行,并以随机顺序重新输出它们。
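A slightly more robust variant of the same one-liner (my tweak, not from the answer): IFS= read -r keeps leading whitespace and backslashes intact, and quoting $line avoids word splitting:
while IFS= read -r line; do printf '%s %s\n' "$RANDOM" "$line"; done < my_file | sort -n | cut -d' ' -f2-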
回答by davvs
You don't need to use pipes here. Sort alone does this with the file(s) as parameters. I would just do
你不需要在这里使用管道。Sort 单独使用文件作为参数执行此操作。我只会做
sort -R file1
or if you have multiple files
或者如果您有多个文件
sort -R file1 file2
回答by untore
It is clearly biased rand (like half the time the list will start with the first line) but for some basic randomization with just bash builtins I guess it is fine? Just print each line yes/no then print the rest...
它显然是有偏差的随机(比如大约一半的情况下结果会以第一行开头),但如果只是想用 bash 内置功能做一点基本的随机化,我想也还可以?只需对每一行随机决定是否立即打印,最后再打印剩下的行......
shuffle() {
    local IFS=$'\n' tail=
    while read l; do
        if [ $((RANDOM%2)) = 1 ]; then
            echo "$l"
        else
            tail="${tail}\n${l}"
        fi
    done < "${1:-/dev/stdin}"   # note: the redirection target is missing in the source; reading "$1 or stdin" here is an assumption
    printf "${tail}\n"
}
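To see the bias the author mentions, one rough check (my addition, and it assumes the function reads standard input as completed above) is to run the shuffle many times and count which line comes out first; a fair shuffle would spread the counts roughly evenly:
for i in $(seq 1 1000); do shuffle < file1 | head -n 1; done | sort | uniq -c | sort -rn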