+ MC146818 RTC
+ i8042 PS/2
+ RNG (using `rdrand`)
+ + [Devicetree](https://www.devicetree.org/)
+ Generic graphics device abstraction layer (Draft)
+ Reference: `lunaix-os/hal/gfxa`
+ Virtual terminal device interface (POSIX.1-2008, section 11)
## 4. Compiling and Building
-**!To build and run right away, see 4.6!**
+**!To build and run right away, see 4.7!**
Building this project requires the following:
### 4.3 Build Options
-
-If the requirements are met, simply run `make all` to build; once finished, a bootable iso can be found in the generated `build` directory.
-
make targets supported by this project:
| Command                  | Purpose                                                              |
| ------------------------ | -------------------------------------------------------------------- |
-| `make all`               | equivalent to `make image`                                           |
-| `make image`             | build the ISO image, directly bootable, using the ISO9660 filesystem |
-| `make kernel`            | build the kernel ELF image; not directly bootable, needs a bootloader |
+| `make all`               | build the kernel ELF image                                           |
+| `make rootfs`            | build the root filesystem image, packaging the programs under `usr/` |
| `make clean`             | remove build caches, for rebuilding from scratch                     |
| `make config`            | configure Lunaix                                                     |
Lunaix is a configurable kernel, allowing users to choose which features to include or remove before compilation.
-Use `make config` for command-line interactive configuration. It is presented in the style of a shell: all configuration items are organized like a file tree, where a single item is a "file" and a group of items forms a directory, shown as entries wrapped in square brackets `[]`. Type `usage` at the prompt and press Enter to see detailed usage.
+Use `make config` for command-line interactive configuration. It is presented as a TUI, similar to menuconfig.
+
+If for some reason the TUI cannot be displayed, it falls back to a shell-style presentation:
+
+All configuration items are organized like a file tree: a single item is a "file", and a group of items forms a directory, shown as entries wrapped in square brackets `[]`. Type `usage` at the prompt and press Enter to see detailed usage.
One of the most commonly used options is `architecture_support/arch`, which selects the instruction set Lunaix targets. For example, to build a Lunaix that runs on the x86_64 platform, type at the prompt (**note the spaces around the equals sign; they must not be omitted**):

```
/architecture_support/arch = x86_64
```
-Then type `exit` to save and quit, and compile as usual.
-
+Then type `exit` to save and exit, and compile as usual.
-### 4.5 Setting Kernel Boot Parameters
+### 4.5 Setting Kernel Parameters
At make time, the `CMDLINE` variable can be used to set the kernel boot parameter list. The list may contain multiple parameters, separated by one or more spaces. Each parameter is either a key-value pair `<key>=<val>` or a switch flag `<flag>`. Lunaix currently supports the following parameters:
+ `console=<dev>` sets the input/output device of the system console (a tty device), where `<dev>` is a device file path (note that the device file path here is as seen by Lunaix, not Linux). For an introduction to the LunaixOS device filesystem, see the Lunaix Wiki (WIP)
++ (see 4.6)
If `CMDLINE` is not specified, the default parameters will be loaded:
**Note:** Depending on the operating system and keyboard layout, telnet clients may map certain key positions (such as backspace and Enter) differently (for example, some versions of Linux map backspace to `0x7f`, i.e. the ASCII `<DEL>` character, rather than the familiar `0x08`). If you want to play with Lunaix over the serial port, modify the terminal initialization code in `usr/init/init.c` and set `VERASE` to the correct mapping (for how to do this, see POSIX termios. Since Lunaix's terminal interface is fully POSIX compatible, you can consult Linux's own `man termios` directly, with no translation needed).
-### 4.6 Testing and Trying Out Lunaix
-You can use the `live_debug.sh` script to run Lunaix quickly. The script builds Lunaix with the default options, then invokes `scripts/qemu.py` to generate the QEMU launch arguments from a configuration file
-(configuration files are located in `scripts/qemus/`)
+### 4.6 Booting Lunaix
-Since this script is mainly meant for the author's own debugging, the following steps are still needed after the QEMU window opens:
+Since Lunaix is positioned as a kernel, the iso packaging feature has been removed, both to avoid too many compile-time prerequisites and to improve flexibility. Lunaix now compiles to a single ELF-format binary, which users can boot however they prefer, with different methods and different bootloaders.
-1. Connect with telnet to `localhost:12345`; this is the UART mapping Lunaix uses for standard input and output (QEMU provides a UART implementation for the guest and redirects it to the host over the telnet protocol)
-2. Type `c` in the GDB window and press Enter; Lunaix then starts running. This allows breakpoints of interest to be set before QEMU starts emulating.
+For Lunaix to boot correctly, the following kernel parameters must be set:
-Running this script requires the `ARCH=<isa>` environment variable, whose value must match the one specified at compile time.
++ `rootfs=` specifies the root device. Its value is a device file path pointing to the disk device that contains the root filesystem, e.g. `/dev/block/sda`. Lunaix automatically mounts this filesystem at the root directory after booting. Without this option, Lunaix refuses to boot and enters a kernel panic (in the Lunaix world, this is known as a Nightmare Moon arrival)
++ `init=` specifies the location of the init program, which must reside in the `rootfs`. This option is optional and defaults to `/init`. The init program is the first program Lunaix runs after booting.
-For example:
+### 4.7 Testing and Trying Out Lunaix
-```sh
-ARCH=x86_64 ./live_debug.sh
-```
+
+For a quick hands-on experience, follow these steps:
+
+1. Pick an architecture you want to try, e.g. `x86_64` (supported: `x86_64`, `i386`). For brevity, it is referred to below as `<arch>`
+2. Check that you have installed: `qemu-system-<arch>`, `gdb`, `python3`, `telnet`, `gcc`
+3. Run `make ARCH=<arch> user` to compile the bundled user programs
+4. Run `make ARCH=<arch> rootfs` to pack the root filesystem image (the host system needs `dd`, `mkfs.ext2`, `mount -o loop`, `mktemp`)
+5. Run `ARCH=<arch> ./live_debug.sh` to start it
+
+The script builds Lunaix with the default options, then invokes `scripts/qemu.py` to generate the QEMU launch arguments from a configuration file (configuration files are located in `scripts/qemus/`)
+
+Since this script is mainly meant for the author's own debugging, the following steps are still needed after the QEMU window opens:
+
+1. Connect with telnet to `localhost:12345`; this is the UART mapping Lunaix uses for standard input and output (QEMU provides a UART implementation for the guest and redirects it to the host over the telnet protocol)
+2. Type `c` in the GDB window and press Enter; Lunaix then starts running. This allows breakpoints of interest to be set before QEMU starts emulating.
## 5. Running, Branches, and Issues
*.ld
*.log
+scripts/*.tool
+
__pycache__
.config.json
sources([
"boot/mb_parser.c",
"boot/kpt_setup.c",
- "boot/boot_helper.c"
+ "boot/boot_helper.c",
+ "boot/bootmem.c"
])
sources([
#include <lunaix/boot_generic.h>
#include <lunaix/mm/pagetable.h>
+#include <lunaix/mm/pmm.h>
+#include <lunaix/spike.h>
+#include <lunaix/sections.h>
+#include <lunaix/generic/bootmem.h>
-#include "sys/mm/mm_defs.h"
+#include <sys/mm/mm_defs.h>
+#include <sys/boot/bstage.h>
#ifdef CONFIG_ARCH_X86_64
void
boot_clean_arch_reserve(struct boot_handoff* bhctx)
{
- return;
+ pfn_t start;
+
+ start = leaf_count(__ptr(__kboot_start));
+ pmm_unhold_range(start, leaf_count(__ptr(__kboot_end)) - start);
}
#else
vmm_unset_ptes(ptep, count);
}
+#endif
+
+extern void
+mb_parse(struct boot_handoff* bhctx);
+
+struct boot_handoff*
+prepare_boot_handover()
+{
+ struct boot_handoff* handoff;
+
+ handoff = bootmem_alloc(sizeof(*handoff));
+
+ mb_parse(handoff);
-#endif
\ No newline at end of file
+ return handoff;
+}
\ No newline at end of file
--- /dev/null
+#include <lunaix/generic/bootmem.h>
+#include <lunaix/sections.h>
+#include <lunaix/spike.h>
+
+#define BOOTMEM_SIZE (4 * 4096)
+
+static reclaimable char bootmem_pool[BOOTMEM_SIZE];
+static unsigned int pos;
+
+void*
+bootmem_alloc(unsigned int size)
+{
+ ptr_t res;
+
+ res = __ptr(bootmem_pool) + pos;
+
+ size = ROUNDUP(size, 4);
+ pos += size;
+
+ if (pos >= BOOTMEM_SIZE) {
+ asm ("ud2");
+ unreachable;
+ }
+
+ return (void*)res;
+}
+
+void
+bootmem_free(void* ptr)
+{
+    // no need to support freeing: bootmem allocations are all one-shot
+ return;
+}
\ No newline at end of file
#include "sys/crx.h"
#include "sys/cpu.h"
+ptr_t __multiboot_addr boot_data;
+
void boot_text
-x86_init(struct multiboot_info* mb)
+x86_init(ptr_t mb)
{
- mb_parse(mb);
+ __multiboot_addr = mb;
cr4_setfeature(CR4_OSXMMEXCPT | CR4_OSFXSR | CR4_PSE36);
#include <lunaix/mm/pagetable.h>
#include <lunaix/compiler.h>
+#include <lunaix/sections.h>
#include <sys/boot/bstage.h>
#include <sys/mm/mm_defs.h>
-bridge_farsym(__kexec_start);
-bridge_farsym(__kexec_end);
+#define PF_X 0x1
+#define PF_W 0x2
+#define ksection_maps autogen_name(ksecmap)
+
+extern_autogen(ksecmap);
+
bridge_farsym(__kexec_text_start);
-bridge_farsym(__kexec_text_end);
+bridge_farsym(ksection_maps);
// define the initial page table layout
struct kernel_map;
static struct kernel_map kernel_pt __section(".kpg");
export_symbol(debug, boot, kernel_pt);
-struct kernel_map {
+struct kernel_map
+{
pte_t l0t[_PAGE_LEVEL_SIZE];
pte_t pg_mnt[_PAGE_LEVEL_SIZE];
pte_t* ktep = (pte_t*) kpt_pa->kernel_lfts;
pte_t* boot_l0tep = (pte_t*) kpt_pa;
- set_pte(boot_l0tep, pte_mkhuge(mkpte_prot(KERNEL_DATA)));
+ set_pte(boot_l0tep, pte_mkhuge(mkpte_prot(KERNEL_PGTAB)));
// --- 将内核重映射至高半区 ---
// Hook the kernel reserved LFTs onto L0T
- pte_t pte = mkpte((ptr_t)ktep, KERNEL_DATA);
+ pte_t pte = mkpte((ptr_t)ktep, KERNEL_PGTAB);
for (u32_t i = 0; i < KEXEC_RSVD; i++) {
pte = pte_setpaddr(pte, (ptr_t)&kpt_pa->kernel_lfts[i]);
klptep++;
}
+ struct ksecmap* maps;
+ struct ksection* section;
+ pfn_t pgs;
+ pte_t *kmntep;
+
+ maps = (struct ksecmap*)to_kphysical(__far(ksection_maps));
+ ktep += pfn(to_kphysical(__far(__kexec_text_start)));
+
// Ensure the size of kernel is within the reservation
- pfn_t kimg_pagecount =
- pfn(__far(__kexec_end) - __far(__kexec_start));
- if (kimg_pagecount > KEXEC_RSVD * _PAGE_LEVEL_SIZE) {
+ if (leaf_count(maps->ksize) > KEXEC_RSVD * _PAGE_LEVEL_SIZE)
+ {
// ERROR: require more pages
// here should do something else other than head into blocking
asm("ud2");
}
- // Now, map the kernel
+ // Now, map the sections
- pfn_t kimg_end = pfn(to_kphysical(__far(__kexec_end)));
- pfn_t i = pfn(to_kphysical(__far(__kexec_text_start)));
- ktep += i;
+ for (unsigned int i = 0; i < maps->num; i++)
+ {
+ section = &maps->secs[i];
- // kernel .text
- pte = pte_setprot(pte, KERNEL_EXEC);
- pfn_t ktext_end = pfn(to_kphysical(__far(__kexec_text_end)));
- for (; i < ktext_end; i++) {
- pte = pte_setpaddr(pte, page_addr(i));
- set_pte(ktep, pte);
+ if (section->va < KERNEL_RESIDENT) {
+ continue;
+ }
- ktep++;
- }
+ pte = mkpte_prot(KERNEL_RDONLY);
+ if ((section->flags & PF_X)) {
+ pte = pte_mkexec(pte);
+ }
+ if ((section->flags & PF_W)) {
+ pte = pte_mkwritable(pte);
+ }
- // all remaining kernel sections
- pte = pte_setprot(pte, KERNEL_DATA);
- for (; i < kimg_end; i++) {
- pte = pte_setpaddr(pte, page_addr(i));
- set_pte(ktep, pte);
+ pgs = leaf_count(section->size);
+ for (pfn_t j = 0; j < pgs; j++)
+ {
+ pte = pte_setpaddr(pte, section->pa + page_addr(j));
+ set_pte(ktep, pte);
- ktep++;
+ ktep++;
+ }
}
- // XXX: Mapping the kernel .rodata section?
-
// set mount point
- pte_t* kmntep = (pte_t*) &kpt_pa->l0t[pfn_at(PG_MOUNT_1, L0T_SIZE)];
- set_pte(kmntep, mkpte((ptr_t)kpt_pa->pg_mnt, KERNEL_DATA));
+ kmntep = (pte_t*) &kpt_pa->l0t[pfn_at(PG_MOUNT_1, L0T_SIZE)];
+ set_pte(kmntep, mkpte((ptr_t)kpt_pa->pg_mnt, KERNEL_PGTAB));
// Build up self-reference
int level = (VMS_SELF / L0T_SIZE) & _PAGE_LEVEL_MASK;
- pte = mkpte_root((ptr_t)kpt_pa, KERNEL_DATA);
+ pte = mkpte_root((ptr_t)kpt_pa, KERNEL_PGTAB);
set_pte(&boot_l0tep[level], pte);
}
movw $TSS_SEG, %ax
ltr %ax
- movl $bhctx_buffer, (%esp) # mb_parser.c
+ call prepare_boot_handover
+
+ movl %eax, (%esp)
call kernel_bootstrap
1:
#define __BOOT_CODE__
#include <lunaix/boot_generic.h>
+#include <lunaix/generic/bootmem.h>
+
#include <sys/boot/bstage.h>
#include <sys/boot/multiboot.h>
#include <sys/mm/mempart.h>
-#define BHCTX_ALLOC 4096
-#define MEM_1M 0x100000UL
-
-
-u8_t bhctx_buffer[BHCTX_ALLOC] boot_bss;
-
-#define check_buffer(ptr) \
- if ((ptr) >= ((ptr_t)bhctx_buffer + BHCTX_ALLOC)) { \
- asm("ud2"); \
- }
-
-size_t boot_text
-mb_memcpy(u8_t* destination, u8_t* base, unsigned int size)
-{
- unsigned int i = 0;
- for (; i < size; i++) {
- *(destination + i) = *(base + i);
- }
- return i;
-}
-
-size_t boot_text
-mb_strcpy(char* destination, char* base)
-{
- int i = 0;
- char c = 0;
- while ((c = base[i])) {
- destination[i] = c;
- i++;
- }
-
- destination[++i] = 0;
+#include <klibc/string.h>
- return i;
-}
-
-size_t boot_text
-mb_strlen(char* s)
-{
- int i = 0;
- while (s[i++])
- ;
- return i;
-}
+#define MEM_1M 0x100000UL
-size_t boot_text
-mb_parse_cmdline(struct boot_handoff* bhctx, void* buffer, char* cmdline)
+static void
+mb_parse_cmdline(struct boot_handoff* bhctx, char* cmdline)
{
-#define SPACE ' '
-
- size_t slen = mb_strlen(cmdline);
-
+ size_t slen;
+ char* cmd;
+
+ slen = strlen(cmdline);
if (!slen) {
- return 0;
+ return;
}
- mb_memcpy(buffer, (u8_t*)cmdline, slen);
- bhctx->kexec.len = slen;
- bhctx->kexec.cmdline = buffer;
+ cmd = bootmem_alloc(slen + 1);
+ strncpy(cmd, cmdline, slen);
- return slen;
+ bhctx->kexec.len = slen;
+ bhctx->kexec.cmdline = cmd;
}
-size_t boot_text
+static void
mb_parse_mmap(struct boot_handoff* bhctx,
- struct multiboot_info* mb,
- void* buffer)
+ struct multiboot_info* mb)
{
- struct multiboot_mmap_entry* mb_mmap =
- (struct multiboot_mmap_entry*)__ptr(mb->mmap_addr);
- size_t mmap_len = mb->mmap_length / sizeof(struct multiboot_mmap_entry);
+ struct multiboot_mmap_entry *mb_mmap, *mb_mapent;
+ size_t mmap_len;
+ struct boot_mmapent *bmmap, *bmmapent;
- struct boot_mmapent* bmmap = (struct boot_mmapent*)buffer;
- for (size_t i = 0; i < mmap_len; i++) {
- struct boot_mmapent* bmmapent = &bmmap[i];
- struct multiboot_mmap_entry* mb_mapent = &mb_mmap[i];
+ mb_mmap = (struct multiboot_mmap_entry*)__ptr(mb->mmap_addr);
+ mmap_len = mb->mmap_length / sizeof(*mb_mmap);
- if (mb_mapent->type == MULTIBOOT_MEMORY_AVAILABLE) {
+ bmmap = bootmem_alloc(sizeof(*bmmap) * mmap_len);
+
+ for (size_t i = 0; i < mmap_len; i++) {
+ mb_mapent = &mb_mmap[i];
+ bmmapent = &bmmap[i];
+
+ if (mb_mapent->type == MULTIBOOT_MEMORY_AVAILABLE)
+ {
bmmapent->type = BOOT_MMAP_FREE;
- } else if (mb_mapent->type == MULTIBOOT_MEMORY_ACPI_RECLAIMABLE) {
+ }
+
+ else if (mb_mapent->type == MULTIBOOT_MEMORY_ACPI_RECLAIMABLE)
+ {
bmmapent->type = BOOT_MMAP_RCLM;
- } else {
+ }
+
+ else {
bmmapent->type = BOOT_MMAP_RSVD;
}
bhctx->mem.size = (mb->mem_upper << 10) + MEM_1M;
bhctx->mem.mmap = bmmap;
bhctx->mem.mmap_len = mmap_len;
-
- return mmap_len * sizeof(struct boot_mmapent);
}
-size_t boot_text
+static void
mb_parse_mods(struct boot_handoff* bhctx,
- struct multiboot_info* mb,
- void* buffer)
+ struct multiboot_info* mb)
{
if (!mb->mods_count) {
bhctx->mods.mods_num = 0;
- return 0;
+ return;
}
- struct boot_modent* modents = (struct boot_modent*)buffer;
- struct multiboot_mod_list* mods =
- (struct multiboot_mod_list*)__ptr(mb->mods_addr);
+ struct boot_modent* modents;
+ struct multiboot_mod_list* mods, *mod;
+ size_t name_len;
+ char* mod_name, *cmd;
- ptr_t mod_str_ptr = __ptr(&modents[mb->mods_count]);
+ mods = (struct multiboot_mod_list*)__ptr(mb->mods_addr);
+ modents = bootmem_alloc(sizeof(*modents) * mb->mods_count);
for (size_t i = 0; i < mb->mods_count; i++) {
- struct multiboot_mod_list* mod = &mods[i];
- modents[i] = (struct boot_modent){ .start = mod->mod_start,
- .end = mod->mod_end,
- .str = (char*)mod_str_ptr };
-
- mod_str_ptr += mb_strcpy((char*)mod_str_ptr,
- (char*)__ptr(mod->cmdline));
+ mod = &mods[i];
+ cmd = (char*)__ptr(mod->cmdline);
+ name_len = strlen(cmd);
+ mod_name = bootmem_alloc(name_len + 1);
+
+ modents[i] = (struct boot_modent){
+ .start = mod->mod_start,
+ .end = mod->mod_end,
+ .str = mod_name
+ };
+
+ strncpy(mod_name, cmd, name_len);
}
bhctx->mods.mods_num = mb->mods_count;
bhctx->mods.entries = modents;
-
- return mod_str_ptr - (ptr_t)buffer;
}
-void boot_text
+static void
mb_prepare_hook(struct boot_handoff* bhctx)
{
// nothing to do
}
-void boot_text
+static void
mb_release_hook(struct boot_handoff* bhctx)
{
// nothing to do
}
-#define align_addr(addr) (((addr) + (sizeof(ptr_t) - 1)) & ~(sizeof(ptr_t) - 1))
-
-struct boot_handoff* boot_text
-mb_parse(struct multiboot_info* mb)
+void
+mb_parse(struct boot_handoff* bhctx)
{
- struct boot_handoff* bhctx = (struct boot_handoff*)bhctx_buffer;
- ptr_t bhctx_ex = (ptr_t)&bhctx[1];
+ struct multiboot_info* mb;
- *bhctx = (struct boot_handoff){ };
+ mb = (struct multiboot_info*)__multiboot_addr;
/* Parse memory map */
if ((mb->flags & MULTIBOOT_INFO_MEM_MAP)) {
- bhctx_ex += mb_parse_mmap(bhctx, mb, (void*)bhctx_ex);
- bhctx_ex = align_addr(bhctx_ex);
+ mb_parse_mmap(bhctx, mb);
}
/* Parse cmdline */
if ((mb->flags & MULTIBOOT_INFO_CMDLINE)) {
- bhctx_ex +=
- mb_parse_cmdline(bhctx, (void*)bhctx_ex, (char*)__ptr(mb->cmdline));
- bhctx_ex = align_addr(bhctx_ex);
+ mb_parse_cmdline(bhctx, (char*)__ptr(mb->cmdline));
}
/* Parse sys modules */
if ((mb->flags & MULTIBOOT_INFO_MODS)) {
- bhctx_ex += mb_parse_mods(bhctx, mb, (void*)bhctx_ex);
- bhctx_ex = align_addr(bhctx_ex);
+ mb_parse_mods(bhctx, mb);
}
- check_buffer(bhctx_ex);
-
bhctx->prepare = mb_prepare_hook;
bhctx->release = mb_release_hook;
-
- return bhctx;
}
\ No newline at end of file
#include "sys/mm/mempart64.h"
-.section .boot.bss
+.section .boot.data
.align 8
__tmp_gdt:
.long 0x0
#include "sys/crx.h"
#include "sys/cpu.h"
+ptr_t __multiboot_addr boot_data;
+
void boot_text
-x86_init(struct multiboot_info* mb)
+x86_init(ptr_t mb)
{
- mb_parse(mb);
+ __multiboot_addr = mb;
cr4_setfeature(CR4_PCIDE);
#include <lunaix/mm/pagetable.h>
#include <lunaix/compiler.h>
+#include <lunaix/sections.h>
#include <sys/boot/bstage.h>
#include <sys/mm/mm_defs.h>
#define RSVD_PAGES 32
-bridge_farsym(__kexec_start);
-bridge_farsym(__kexec_end);
+#define ksection_maps autogen_name(ksecmap)
+#define PF_X 0x1
+#define PF_W 0x2
+
+extern_autogen(ksecmap);
+
bridge_farsym(__kexec_text_start);
-bridge_farsym(__kexec_text_end);
+bridge_farsym(ksection_maps);
// define the initial page table layout
struct kernel_map;
static struct kernel_map kpt __section(".kpg");
export_symbol(debug, boot, kpt);
-struct kernel_map {
+struct kernel_map
+{
pte_t l0t[_PAGE_LEVEL_SIZE]; // root table
pte_t l1t_rsvd[_PAGE_LEVEL_SIZE]; // 0~4G reservation
gran = gran >> _PAGE_LEVEL_SHIFT;
if (pte_isnull(pte)) {
- pte = mkpte(alloc_rsvd_page(_allc), KERNEL_DATA);
+ pte = mkpte(alloc_rsvd_page(_allc), KERNEL_PGTAB);
if (to_gran == gran) {
pte = pte_setprot(pte, prot);
}
static void boot_text
do_remap()
{
- struct kernel_map* kpt_pa = (struct kernel_map*)to_kphysical(&kpt);
-
- pte_t* boot_l0tep = (pte_t*) kpt_pa;
- pte_t *klptep, pte;
+ struct kernel_map* kpt_pa;
+ pte_t *boot_l0tep, *klptep, *l1_rsvd;
+ pte_t id_map, pte;
+ ptr_t kstart;
+
+ unsigned int lvl_i = 0;
// identity map the first 4G for legacy compatibility
- pte_t* l1_rsvd = (pte_t*) kpt_pa->l1t_rsvd;
- pte_t id_map = pte_mkhuge(mkpte_prot(KERNEL_DATA));
+ kpt_pa = (struct kernel_map*)to_kphysical(&kpt);
+ boot_l0tep = (pte_t*) kpt_pa;
+ l1_rsvd = (pte_t*) kpt_pa->l1t_rsvd;
+ id_map = pte_mkhuge(mkpte_prot(KERNEL_PGTAB));
- set_pte(boot_l0tep, mkpte((ptr_t)l1_rsvd, KERNEL_DATA));
+ pte = mkpte((ptr_t)l1_rsvd, KERNEL_PGTAB);
+ set_pte(boot_l0tep, pte);
for (int i = 0; i < 4; i++, l1_rsvd++)
{
// Remap the kernel to -2GiB
- int table_usage = 0;
- unsigned int lvl_i = 0;
struct allocator alloc = {
.kpt_pa = kpt_pa,
.pt_usage = 0
};
- prealloc_pt(&alloc, VMAP, KERNEL_DATA, L1T_SIZE);
-
- prealloc_pt(&alloc, PG_MOUNT_1, KERNEL_DATA, LFT_SIZE);
+ prealloc_pt(&alloc, VMAP, KERNEL_PGTAB, L1T_SIZE);
+ prealloc_pt(&alloc, PG_MOUNT_1, KERNEL_PGTAB, LFT_SIZE);
-
- ptr_t kstart = page_aligned(__far(__kexec_text_start));
+ kstart = page_aligned(__far(__kexec_text_start));
#if LnT_ENABLED(3)
size_t gran = L3T_SIZE;
size_t gran = L2T_SIZE;
#endif
- prealloc_pt(&alloc, PMAP, KERNEL_DATA, gran);
- klptep = prealloc_pt(&alloc, kstart, KERNEL_DATA, gran);
+ prealloc_pt(&alloc, PMAP, KERNEL_PGTAB, gran);
+ klptep = prealloc_pt(&alloc, kstart, KERNEL_PGTAB, gran);
klptep += va_level_index(kstart, gran);
- pte = mkpte(0, KERNEL_DATA);
+ pte = mkpte(0, KERNEL_PGTAB);
for (int i = alloc.pt_usage; i < KEXEC_RSVD; i++)
{
pte = pte_setpaddr(pte, (ptr_t)&kpt_pa->krsvd[i]);
set_pte(klptep++, pte);
}
+ struct ksecmap* maps;
+ struct ksection* section;
+ pfn_t pgs;
+
+ maps = (struct ksecmap*)to_kphysical(__far(ksection_maps));
+
// this is the first LFT we hooked on.
- // all these LFT are contig in physical address
+ // all these LFT are contig in physical address
klptep = (pte_t*) &kpt_pa->krsvd[alloc.pt_usage];
-
+ klptep += pfn(to_kphysical(kstart));
+
// Ensure the size of kernel is within the reservation
- int remain = KEXEC_RSVD - table_usage;
- pfn_t kimg_pagecount =
- pfn(__far(__kexec_end) - __far(__kexec_start));
- if (kimg_pagecount > remain * _PAGE_LEVEL_SIZE) {
- // ERROR: require more pages
- // here should do something else other than head into blocking
+ int remain = KEXEC_RSVD - alloc.pt_usage;
+ if (leaf_count(maps->ksize) > remain * _PAGE_LEVEL_SIZE)
+ {
asm("ud2");
}
- // kernel .text
- pfn_t ktext_end = pfn(to_kphysical(__far(__kexec_text_end)));
- pfn_t i = pfn(to_kphysical(kstart));
+ // assume contig kernel vaddrs
+ for (unsigned int i = 0; i < maps->num; i++)
+ {
+ section = &maps->secs[i];
- klptep += i;
- pte = pte_setprot(pte, KERNEL_EXEC);
- for (; i < ktext_end; i++) {
- pte = pte_setpaddr(pte, page_addr(i));
- set_pte(klptep, pte);
+ if (section->va < KERNEL_RESIDENT) {
+ continue;
+ }
- klptep++;
- }
-
- pfn_t kimg_end = pfn(to_kphysical(__far(__kexec_end)));
+ pte = mkpte_prot(KERNEL_RDONLY);
+ if ((section->flags & PF_X)) {
+ pte = pte_mkexec(pte);
+ }
+ if ((section->flags & PF_W)) {
+ pte = pte_mkwritable(pte);
+ }
- // all remaining kernel sections
- pte = pte_setprot(pte, KERNEL_DATA);
- for (; i < kimg_end; i++) {
- pte = pte_setpaddr(pte, page_addr(i));
- set_pte(klptep, pte);
+ pgs = leaf_count(section->size);
+ for (pfn_t j = 0; j < pgs; j++)
+ {
+ pte = pte_setpaddr(pte, section->pa + page_addr(j));
+ set_pte(klptep, pte);
- klptep++;
+ klptep++;
+ }
}
// Build up self-reference
lvl_i = va_level_index(VMS_SELF, L0T_SIZE);
- pte = mkpte_root(__ptr(kpt_pa), KERNEL_DATA);
+ pte = mkpte_root(__ptr(kpt_pa), KERNEL_PGTAB);
set_pte(boot_l0tep + lvl_i, pte);
}
movw $TSS_SEG, %ax
ltr %ax
+ call prepare_boot_handover
+
xorq %rbp, %rbp
- movq $bhctx_buffer, %rdi # mb_parser.c
+ movq %rax, %rdi
call kernel_bootstrap
1:
#include "base_defs.ld.inc"
-__kboot_start = .;
-
-.boot.text BLOCK(PAGE_GRAN) :
+.boot.text BLOCK(PAGE_GRAN) :
{
#if defined(CONFIG_X86_BL_MB) || defined(CONFIG_X86_BL_MB2)
*(.multiboot)
#endif
*(.boot.text)
-}
-
-.boot.bss BLOCK(PAGE_GRAN) :
-{
- *(.boot.bss)
-}
+} : boot_text
-.boot.data BLOCK(PAGE_GRAN) :
+.boot.data BLOCK(PAGE_GRAN) :
{
+    /*
+      We treat our boot.bss as data
+      to avoid confusing the linker and some bootloaders
+     */
*(.boot.data)
-}
-
-__kboot_end = ALIGN(PAGE_GRAN);
+ *(.boot.bss)
+} : boot_data
#endif /* __LUNAIX_BOOT_SECS_LD_INC */
#ifndef __LUNAIX_BSTAGE_H
#define __LUNAIX_BSTAGE_H
#include <lunaix/types.h>
+#include <lunaix/boot_generic.h>
+
+extern ptr_t __multiboot_addr;
+
+extern u8_t __kboot_start[];
+extern u8_t __kboot_end[];
#define boot_text __attribute__((section(".boot.text")))
#define boot_data __attribute__((section(".boot.data")))
-#define boot_bss __attribute__((section(".boot.bss")))
+#define boot_bss __attribute__((section(".boot.bss")))
/*
Bridge the far symbol to the vicinity.
code is too far away from the boot code.
*/
#ifdef CONFIG_ARCH_X86_64
-#define bridge_farsym(far_sym) \
+#define __bridge_farsym(far_sym) \
asm( \
- ".section .boot.bss\n" \
+ ".section .boot.data\n" \
".align 8\n" \
".globl __lc_" #far_sym "\n" \
"__lc_" #far_sym ":\n" \
".previous\n" \
); \
extern unsigned long __lc_##far_sym[];
-#define __far(far_sym) (__lc_##far_sym[0])
+#define bridge_farsym(far_sym) __bridge_farsym(far_sym)
+
+#define ___far(far_sym) (__lc_##far_sym[0])
+#define __far(far_sym) ___far(far_sym)
#else
-#define bridge_farsym(far_sym) extern u8_t far_sym[];
-#define __far(far_sym) ((ptr_t)far_sym)
+#define __bridge_farsym(far_sym) extern unsigned long far_sym[]
+#define ___far(far_sym) ((ptr_t)far_sym)
+#define bridge_farsym(far_sym) __bridge_farsym(far_sym);
+#define __far(far_sym) ___far(far_sym)
#endif
-ptr_t remap_kernel();
+ptr_t
+remap_kernel();
#endif /* __LUNAIX_BSTAGE_H */
#define __ASM__
+#include <linking/base_defs.ld.inc>
#include "multiboot.h"
.section .multiboot
__mb_start:
.4byte MULTIBOOT_MAGIC
+/*
+  One may wonder why we set the address header part.
+
+  This is due to some weirdo's patch to QEMU that prohibits loading
+  ELF64 via the -kernel option. The only way around it is to fool
+  the multiboot loader by pretending to be a non-ELF kernel.
+
+  Although this "trick" may seem non-portable, it is actually the
+  other way around: it lets us relax the assumptions that grub
+  (and other multiboot-compliant loaders) make about our executable.
+ */
+
#ifdef CONFIG_X86_BL_MB
- #define MB_FLAGS (MULTIBOOT_MEMORY_INFO | MULTIBOOT_PAGE_ALIGN)
+ #define MB_FLAGS ( MULTIBOOT_MEMORY_INFO |\
+ MULTIBOOT_PAGE_ALIGN |\
+ MULTIBOOT_AOUT_KLUDGE )
.4byte MB_FLAGS
.4byte -(MULTIBOOT_MAGIC + MB_FLAGS)
+ .4byte __mb_start
+ .4byte __kload_start
+ .4byte __kload_end
+ .4byte __kbss_end
+ .4byte ENTRY_POINT
#elif CONFIG_X86_BL_MB2
#define HDR_LEN (__mb_end - __mb_start)
.align MULTIBOOT_TAG_ALIGN
__mbir_tag_start:
- .2byte MULTIBOOT_HEADER_TAG_INFORMATION_REQUEST
- .2byte 0
+ .2byte MULTIBOOT_HEADER_TAG_INFORMATION_REQUEST
+ .2byte 0
.4byte __mbir_tag_end - __mbir_tag_start
.4byte MULTIBOOT_TAG_TYPE_CMDLINE
.4byte MULTIBOOT_TAG_TYPE_MMAP
.4byte MULTIBOOT_TAG_TYPE_MODULE
__mbir_tag_end:
+ __maddr_tag_start:
+ .2byte MULTIBOOT_HEADER_TAG_ADDRESS
+ .2byte 0
+ .4byte __maddr_tag_end - __maddr_tag_start
+ .4byte __mb_start
+ .4byte __kload_start
+ .4byte __kload_end
+ .4byte __kbss_end
+ __maddr_tag_end:
+
+ __ment_tag_start:
+ .2byte MULTIBOOT_HEADER_TAG_ENTRY_ADDRESS
+ .2byte 0
+ .4byte __ment_tag_end - __ment_tag_start
+ .4byte ENTRY_POINT
+ __ment_tag_end:
+
.align MULTIBOOT_TAG_ALIGN
.2byte MULTIBOOT_HEADER_TAG_END
.2byte 0
- .4byte 8
+ .4byte 8
#endif
#include <lunaix/mm/pagetable.h>
#include <lunaix/mann_flags.h>
-static inline pte_attr_t
-translate_vmr_prot(unsigned int vmr_prot)
+static inline pte_t
+translate_vmr_prot(unsigned int vmr_prot, pte_t pte)
{
- pte_attr_t _pte_prot = _PTE_U;
- if ((vmr_prot & PROT_READ)) {
- _pte_prot |= _PTE_R;
- }
+ pte = pte_mkuser(pte);
if ((vmr_prot & PROT_WRITE)) {
- _pte_prot |= _PTE_W;
+ pte = pte_mkwritable(pte);
}
- if (!(vmr_prot & PROT_EXEC)) {
- _pte_prot |= _PTE_NX;
+ if ((vmr_prot & PROT_EXEC)) {
+ pte = pte_mkexec(pte);
+ }
+ else {
+ pte = pte_mknonexec(pte);
}
- return _pte_prot;
+ return pte;
}
#define KERNEL_IMG_SIZE __ulong(0x4000000)
#define KERNEL_IMG_END END_POINT(KERNEL_IMG)
-#define PG_MOUNT_1 __ulong(0xc4000000)
+#define PG_MOUNT_1 __ulong(0xc8000000)
#define PG_MOUNT_1_SIZE __ulong(0x1000)
#define PG_MOUNT_1_END END_POINT(PG_MOUNT_1)
-#define PG_MOUNT_2 __ulong(0xc4001000)
+#define PG_MOUNT_2 __ulong(0xc8001000)
#define PG_MOUNT_2_SIZE __ulong(0x1000)
#define PG_MOUNT_2_END END_POINT(PG_MOUNT_2)
-#define PG_MOUNT_3 __ulong(0xc4002000)
+#define PG_MOUNT_3 __ulong(0xc8002000)
#define PG_MOUNT_3_SIZE __ulong(0x1000)
#define PG_MOUNT_3_END END_POINT(PG_MOUNT_3)
-#define PG_MOUNT_4 __ulong(0xc4003000)
+#define PG_MOUNT_4 __ulong(0xc8003000)
#define PG_MOUNT_4_SIZE __ulong(0x1000)
#define PG_MOUNT_4_END END_POINT(PG_MOUNT_4)
-#define PG_MOUNT_VAR __ulong(0xc4004000)
+#define PG_MOUNT_VAR __ulong(0xc8004000)
#define PG_MOUNT_VAR_SIZE __ulong(0x3fc000)
#define PG_MOUNT_VAR_END END_POINT(PG_MOUNT_VAR)
-#define VMAP __ulong(0xc4400000)
-#define VMAP_SIZE __ulong(0x3b400000)
+#define VMAP __ulong(0xc8400000)
+#define VMAP_SIZE __ulong(0x37400000)
#define VMAP_END END_POINT(VMAP)
#define PMAP VMAP
#endif
-#define _PTE_PROT_MASK ( _PTE_W | _PTE_U | _PTE_X )
+#define _PTE_PPFN_MASK ( (~PAGE_MASK & PMS_MASK))
+#define _PTE_PROT_MASK ( ~_PTE_PPFN_MASK )
#define KERNEL_PAGE ( _PTE_P )
#define KERNEL_EXEC ( KERNEL_PAGE | _PTE_X )
#define KERNEL_DATA ( KERNEL_PAGE | _PTE_W | _PTE_NX )
#define KERNEL_RDONLY ( KERNEL_PAGE | _PTE_NX )
#define KERNEL_ROEXEC ( KERNEL_PAGE | _PTE_X )
+#define KERNEL_PGTAB ( KERNEL_PAGE | _PTE_W )
+#define KERNEL_DEFAULT KERNEL_PGTAB
#define USER_PAGE ( _PTE_P | _PTE_U )
#define USER_EXEC ( USER_PAGE | _PTE_X )
#define USER_DATA ( USER_PAGE | _PTE_W | _PTE_NX )
#define USER_RDONLY ( USER_PAGE | _PTE_NX )
#define USER_ROEXEC ( USER_PAGE | _PTE_X )
+#define USER_PGTAB ( USER_PAGE | _PTE_W )
+#define USER_DEFAULT USER_PGTAB
-#define SELF_MAP ( KERNEL_DATA | _PTE_WT | _PTE_CD )
+#define SELF_MAP ( KERNEL_PGTAB | _PTE_WT | _PTE_CD )
#define __mkpte_from(pte_val) ((pte_t){ .val = (pte_val) })
static inline pte_t
pte_setpaddr(pte_t pte, ptr_t paddr)
{
- return __mkpte_from((pte.val & _PAGE_BASE_MASK) | (paddr & ~_PAGE_BASE_MASK));
+ return __mkpte_from((pte.val & _PTE_PROT_MASK) | (paddr & ~_PTE_PROT_MASK));
}
static inline pte_t
pte_setppfn(pte_t pte, pfn_t ppfn)
{
- return __mkpte_from((pte.val & _PAGE_BASE_MASK) | (ppfn * PAGE_SIZE));
+ return pte_setpaddr(pte, ppfn * PAGE_SIZE);
}
static inline ptr_t
pte_paddr(pte_t pte)
{
- return __paddr(pte.val) & ~_PAGE_BASE_MASK;
+ return __paddr(pte.val) & ~_PTE_PROT_MASK;
}
static inline pfn_t
pte_ppfn(pte_t pte)
{
- return __paddr(pte.val) >> _PAGE_BASE_SHIFT;
+ return pte_paddr(pte) >> _PAGE_BASE_SHIFT;
}
static inline pte_t
#include <lunaix/mm/page.h>
#include <lunaix/mm/pagetable.h>
-
-extern unsigned int __kexec_end[];
+#include <lunaix/sections.h>
void
pmm_arch_init_pool(struct pmem* memory)
ptr_t
pmm_arch_init_remap(struct pmem* memory, struct boot_handoff* bctx)
{
- size_t ppfn_total = pfn(bctx->mem.size) + 1;
+ size_t ppfn_total = pfn(bctx->mem.size);
size_t pool_size = ppfn_total * sizeof(struct ppage);
size_t i = 0;
return 0;
found:;
- ptr_t kexec_end = to_kphysical(__kexec_end);
+ ptr_t kexec_end = to_kphysical(kernel_start);
ptr_t aligned_pplist = MAX(ent->start, kexec_end);
#ifdef CONFIG_ARCH_X86_64
use("timer")
use("bus")
+if config("use_devicetree"):
+ sources("devtree.c")
\ No newline at end of file
def hal():
""" Lunaix hardware asbtraction layer """
- pass
\ No newline at end of file
+ @Term("Devicetree for hardware discovery")
+ def use_devicetree():
+ """
+ Decide whether to use Devicetree for platform
+ resource topology sensing.
+
+    On some architectures, Lunaix falls back to
+    devicetree when it runs out of options. On others, such
+    as those designed with embedded support in mind,
+    devicetree might be mandatory and perhaps the only
+    way.
+ """
+
+ type(bool)
+ default(not v(arch).startswith("x86"))
+
+ @ReadOnly
+ @Term("Maximum size of device tree blob (in KiB)")
+ def dtb_maxsize():
+ """
+    Maximum size for a firmware-provided device tree blob
+ """
+
+ type(int)
+ default(256)
+
+ return v(use_devicetree)
\ No newline at end of file
--- /dev/null
+#include <lunaix/mm/valloc.h>
+#include <lunaix/syslog.h>
+
+#include <klibc/string.h>
+
+#include <hal/devtree.h>
+
+LOG_MODULE("dtb")
+
+static struct dt_context dtctx;
+
+void
+fdt_itbegin(struct fdt_iter* fdti, struct fdt_header* fdt_hdr)
+{
+ unsigned int off_struct, off_str;
+ struct fdt_token* tok;
+ const char* str_blk;
+
+ off_str = le(fdt_hdr->off_dt_strings);
+ off_struct = le(fdt_hdr->off_dt_struct);
+
+ tok = offset_t(fdt_hdr, struct fdt_token, off_struct);
+ str_blk = offset_t(fdt_hdr, const char, off_str);
+
+ *fdti = (struct fdt_iter) {
+ .pos = tok,
+ .str_block = str_blk
+ };
+}
+
+void
+fdt_itend(struct fdt_iter* fdti)
+{
+ fdti->pos = NULL;
+}
+
+bool
+fdt_itnext(struct fdt_iter* fdti)
+{
+ struct fdt_token *current;
+ struct fdt_prop *prop;
+
+ current = fdti->pos;
+ if (!current) {
+ return false;
+ }
+
+ do
+ {
+ if (fdt_nope(current)) {
+ continue;
+ }
+
+ if (fdt_prop(current)) {
+ prop = (struct fdt_prop*) current;
+ current = offset(current, prop->len);
+ continue;
+ }
+
+ if (fdt_node_end(current)) {
+ fdti->depth--;
+ continue;
+ }
+
+ // node begin
+
+ fdti->depth++;
+ if (fdti->depth == 1) {
+ // enter root node
+ break;
+ }
+
+ while (!fdt_prop(current) && !fdt_node_end(current)) {
+ current++;
+ }
+
+ if (fdt_prop(current)) {
+ break;
+ }
+
+ current++;
+
+ } while (fdt_nope(current) && fdti->depth > 0);
+
+ return fdti->depth > 0;
+}
+
+bool
+fdt_itnext_at(struct fdt_iter* fdti, int level)
+{
+ while (fdti->depth != level && fdt_itnext(fdti));
+
+ return fdti->depth == level;
+}
+
+void
+fdt_memrsvd_itbegin(struct fdt_memrsvd_iter* rsvdi,
+ struct fdt_header* fdt_hdr)
+{
+ size_t off = le(fdt_hdr->off_mem_rsvmap);
+
+ rsvdi->block =
+ offset_t(fdt_hdr, typeof(*rsvdi->block), off);
+
+ rsvdi->block = &rsvdi->block[-1];
+}
+
+bool
+fdt_memrsvd_itnext(struct fdt_memrsvd_iter* rsvdi)
+{
+ struct fdt_memrsvd_ent* ent;
+
+ ent = rsvdi->block;
+ if (!ent) {
+ return false;
+ }
+
+ rsvdi->block++;
+
+ return ent->addr || ent->size;
+}
+
+void
+fdt_memrsvd_itend(struct fdt_memrsvd_iter* rsvdi)
+{
+ rsvdi->block = NULL;
+}
+
+static inline bool
+propeq(struct fdt_iter* it, const char* key)
+{
+ return streq(fdtit_prop_key(it), key);
+}
+
+static inline void
+__mkprop_val32(struct fdt_iter* it, struct dt_prop_val* val)
+{
+ val->u32_val = le(*(u32_t*)&it->prop[1]);
+ val->size = le(it->prop->len);
+}
+
+static inline void
+__mkprop_val64(struct fdt_iter* it, struct dt_prop_val* val)
+{
+ val->u64_val = le64(*(u64_t*)&it->prop[1]);
+ val->size = le(it->prop->len);
+}
+
+static inline void
+__mkprop_ptr(struct fdt_iter* it, struct dt_prop_val* val)
+{
+ val->ptr_val = __ptr(&it->prop[1]);
+ val->size = le(it->prop->len);
+}
+
+static inline u32_t
+__prop_getu32(struct fdt_iter* it)
+{
+ return le(*(u32_t*)&it->prop[1]);
+}
+
+static bool
+__parse_stdbase_prop(struct fdt_iter* it, struct dt_node_base* node)
+{
+ struct fdt_prop* prop;
+
+ prop = it->prop;
+
+ if (propeq(it, "compatible")) {
+ __mkprop_ptr(it, &node->compat);
+ }
+
+ else if (propeq(it, "model")) {
+ node->model = (const char*)&prop[1];
+ }
+
+ else if (propeq(it, "phandle")) {
+ node->phandle = __prop_getu32(it);
+ hashtable_hash_in(dtctx.phnds_table,
+ &node->phnd_link, node->phandle);
+ }
+
+ else if (propeq(it, "#address-cells")) {
+ node->addr_c = (char)__prop_getu32(it);
+ }
+
+ else if (propeq(it, "#size-cells")) {
+ node->sz_c = (char)__prop_getu32(it);
+ }
+
+ else if (propeq(it, "#interrupt-cells")) {
+ node->intr_c = (char)__prop_getu32(it);
+ }
+
+ else if (propeq(it, "status")) {
+ char peek = *(char*)&it->prop[1];
+ if (peek == 'o') {
+ node->status = STATUS_OK;
+ }
+ else if (peek == 'r') {
+ node->status = STATUS_RSVD;
+ }
+ else if (peek == 'd') {
+ node->status = STATUS_DISABLE;
+ }
+ else if (peek == 'f') {
+ node->status = STATUS_FAIL;
+ }
+ }
+
+ else {
+ return false;
+ }
+
+ return true;
+}
+
+static bool
+__parse_stdnode_prop(struct fdt_iter* it, struct dt_node* node)
+{
+ if (propeq(it, "reg")) {
+ __mkprop_ptr(it, &node->reg);
+ }
+
+ else if (propeq(it, "virtual-reg")) {
+ __mkprop_ptr(it, &node->vreg);
+ }
+
+ else if (propeq(it, "ranges")) {
+ __mkprop_ptr(it, &node->ranges);
+ }
+
+ else if (propeq(it, "dma-ranges")) {
+ __mkprop_ptr(it, &node->dma_ranges);
+ }
+
+ else {
+ return false;
+ }
+
+ return true;
+}
+
+static bool
+__parse_stdintr_prop(struct fdt_iter* it, struct dt_intr_node* node)
+{
+ if (propeq(it, "interrupt-map")) {
+ __mkprop_ptr(it, &node->intr_map);
+ }
+
+ else if (propeq(it, "interrupt-map-mask")) {
+ __mkprop_ptr(it, &node->intr_map_mask);
+ }
+
+ else if (propeq(it, "interrupt-parent")) {
+ node->parent_hnd = __prop_getu32(it);
+ }
+
+    else if (propeq(it, "interrupts-extended")) {
+ node->intr.extended = true;
+ __mkprop_ptr(it, &node->intr.arr);
+ }
+
+ else if (!node->intr.extended && propeq(it, "interrupts")) {
+ __mkprop_ptr(it, &node->intr.arr);
+ }
+
+ else {
+ return false;
+ }
+
+ return true;
+}
+
+static bool
+__parse_stdflags(struct fdt_iter* it, struct dt_node_base* node)
+{
+ if (propeq(it, "dma-coherent")) {
+ node->dma_coherent = true;
+ }
+
+ else if (propeq(it, "dma-noncoherent")) {
+ node->dma_ncoherent = true;
+ }
+
+ else if (propeq(it, "interrupt-controller")) {
+ node->intr_controll = true;
+ }
+
+ else {
+ return false;
+ }
+
+ return true;
+}
+
+static void
+__parse_other_prop(struct fdt_iter* it, struct dt_node_base* node)
+{
+ struct dt_prop* prop;
+ const char* key;
+ unsigned int hash;
+
+ prop = valloc(sizeof(*prop));
+ key = fdtit_prop_key(it);
+
+ prop->key = HSTR(key, strlen(key));
+ __mkprop_ptr(it, &prop->val);
+
+ hstr_rehash(&prop->key, HSTR_FULL_HASH);
+ hash = prop->key.hash;
+
+ hashtable_hash_in(node->_op_bucket, &prop->ht, hash);
+}
+
+static void
+__fill_node(struct fdt_iter* it, struct dt_node* node)
+{
+ if (__parse_stdflags(it, &node->base)) {
+ return;
+ }
+
+ if (__parse_stdbase_prop(it, &node->base)) {
+ return;
+ }
+
+ if (__parse_stdnode_prop(it, node)) {
+ return;
+ }
+
+ if (__parse_stdintr_prop(it, &node->intr)) {
+ return;
+ }
+
+ __parse_other_prop(it, &node->base);
+}
+
+static void
+__fill_root(struct fdt_iter* it, struct dt_root* node)
+{
+ if (__parse_stdflags(it, &node->base)) {
+ return;
+ }
+
+ if (__parse_stdbase_prop(it, &node->base)) {
+ return;
+ }
+
+ struct fdt_prop* prop;
+
+ prop = it->prop;
+    if (propeq(it, "serial-number")) {
+        node->serial = (const char*)&prop[1];
+    }
+
+    else if (propeq(it, "chassis-type")) {
+        node->chassis = (const char*)&prop[1];
+    }
+
+    else {
+        __parse_other_prop(it, &node->base);
+    }
+}
+
+static inline void
+__init_node(struct dt_node_base* node)
+{
+ hashtable_init(node->_op_bucket);
+ llist_init_head(&node->children);
+}
+
+static inline void
+__init_node_regular(struct dt_node* node)
+{
+ __init_node(&node->base);
+ node->intr.parent_hnd = PHND_NULL;
+}
+
+static void
+__expand_extended_intr(struct dt_intr_node* intrupt)
+{
+ struct dt_prop_iter it;
+ struct dt_prop_val arr;
+ struct dt_node *node;
+ struct dt_node *master;
+ struct dt_intr_prop* intr_prop;
+
+ if (!intrupt->intr.extended) {
+ return;
+ }
+
+ arr = intrupt->intr.arr;
+ node = DT_NODE(intrupt);
+
+ llist_init_head(&intrupt->intr.values);
+
+ dt_decode(&it, &node->base, &arr, 1);
+
+ dt_phnd_t phnd;
+ while(dtprop_next(&it)) {
+ phnd = dtprop_to_u32(it.prop_loc);
+ master = dt_resolve_phandle(phnd);
+
+ if (!master) {
+ WARN("dtb: (intr_extended) malformed phandle: %d", phnd);
+ continue;
+ }
+
+ intr_prop = valloc(sizeof(*intr_prop));
+
+ intr_prop->master = &master->intr;
+ intr_prop->val = (struct dt_prop_val) {
+ .encoded = it.prop_loc_next,
+ .size = master->base.intr_c
+ };
+
+ llist_append(&intrupt->intr.values, &intr_prop->props);
+ dtprop_next_n(&it, intr_prop->val.size);
+ }
+}
+
+static void
+__resolve_phnd_references()
+{
+ struct dt_node_base *pos, *n;
+ struct dt_node *node, *parent, *default_parent;
+ struct dt_intr_node* intrupt;
+ dt_phnd_t phnd;
+
+ llist_for_each(pos, n, &dtctx.nodes, nodes)
+ {
+ node = (struct dt_node*)pos;
+ intrupt = &node->intr;
+ if (!node->base.intr_c) {
+ continue;
+ }
+
+ phnd = intrupt->parent_hnd;
+ default_parent = (struct dt_node*)node->base.parent;
+ parent = default_parent;
+
+ if (phnd != PHND_NULL) {
+ parent = dt_resolve_phandle(phnd);
+ }
+
+ if (!parent) {
+ WARN("dtb: (phnd_resolve) malformed phandle: %d", phnd);
+ parent = default_parent;
+ }
+
+ intrupt->parent = &parent->intr;
+
+ __expand_extended_intr(intrupt);
+ }
+}
+
+bool
+dt_load(ptr_t dtb_dropoff)
+{
+ dtctx.reloacted_dtb = dtb_dropoff;
+
+ if (dtctx.fdt->magic != FDT_MAGIC) {
+ ERROR("invalid dtb, unexpected magic: 0x%x", dtctx.fdt->magic);
+ return false;
+ }
+
+    size_t str_off = le(dtctx.fdt->off_dt_strings);
+ dtctx.str_block = offset_t(dtb_dropoff, const char, str_off);
+
+ llist_init_head(&dtctx.nodes);
+ hashtable_init(dtctx.phnds_table);
+
+ struct fdt_iter it;
+ struct fdt_token* tok;
+ struct dt_node_base *node, *prev;
+
+ struct dt_node_base* depth[16];
+ bool is_root_level, filled;
+
+ node = NULL;
+ depth[0] = NULL;
+ fdt_itbegin(&it, dtctx.fdt);
+
+ while (fdt_itnext(&it)) {
+ is_root_level = it.depth == 1;
+
+ if (it.depth >= 16) {
+ // tree too deep
+ ERROR("strange dtb, too deep to dive.");
+ return false;
+ }
+
+ depth[it.depth] = NULL;
+ node = depth[it.depth - 1];
+
+ if (!node) {
+ // need new node
+ if (unlikely(is_root_level)) {
+ node = valloc(sizeof(struct dt_root));
+ __init_node(node);
+ }
+ else {
+ node = valloc(sizeof(struct dt_node));
+ prev = depth[it.depth - 2];
+
+ __init_node_regular((struct dt_node*)node);
+ llist_append(&prev->children, &node->siblings);
+ node->parent = prev;
+
+ llist_append(&dtctx.nodes, &node->nodes);
+ }
+
+            node->name = (const char*)&it.pos[1];
+            depth[it.depth - 1] = node;
+        }
+
+ if (unlikely(is_root_level)) {
+ __fill_root(&it, (struct dt_root*)node);
+ }
+ else {
+ __fill_node(&it, (struct dt_node*)node);
+ }
+ }
+
+ fdt_itend(&it);
+
+ dtctx.root = (struct dt_root*)depth[0];
+
+ __resolve_phnd_references();
+
+ return true;
+}
+
+static bool
+__name_starts_with(struct dt_node_base* node, const char* name)
+{
+ int i = 0;
+ const char* be_matched = node->name;
+
+    while (be_matched[i] && name[i])
+    {
+        if (be_matched[i] != name[i]) {
+            return false;
+        }
+        i++;
+    }
+
+ return true;
+}
+
+struct dt_node*
+dt_resolve_phandle(dt_phnd_t phandle)
+{
+ struct dt_node_base *pos, *n;
+ hashtable_hash_foreach(dtctx.phnds_table, phandle, pos, n, phnd_link)
+ {
+ if (pos->phandle == phandle) {
+ return (struct dt_node*)pos;
+ }
+ }
+
+ return NULL;
+}
+
+void
+dt_begin_find(struct dt_node_iter* iter,
+ struct dt_node* node, const char* name)
+{
+ node = node ? : (struct dt_node*)dtctx.root;
+
+ iter->head = &node->base;
+ iter->matched = NULL;
+ iter->name = name;
+
+ struct dt_node_base *pos, *n;
+ llist_for_each(pos, n, &node->base.children, siblings)
+ {
+ if (__name_starts_with(pos, name)) {
+ iter->matched = pos;
+ break;
+ }
+ }
+}
+
+bool
+dt_find_next(struct dt_node_iter* iter,
+ struct dt_node_base** matched)
+{
+ if (!dt_found_any(iter)) {
+ return false;
+ }
+
+ struct dt_node_base *pos, *head;
+
+ head = iter->head;
+ pos = iter->matched;
+ *matched = pos;
+
+ while (&pos->siblings != &head->children)
+ {
+ pos = list_next(pos, struct dt_node_base, siblings);
+
+ if (!__name_starts_with(pos, iter->name)) {
+ continue;
+ }
+
+ iter->matched = pos;
+ return true;
+ }
+
+ return false;
+}
+
+struct dt_prop_val*
+dt_getprop(struct dt_node* node, const char* name)
+{
+ struct hstr hashed_name;
+ struct dt_prop *pos, *n;
+ unsigned int hash;
+
+ hashed_name = HSTR(name, strlen(name));
+ hstr_rehash(&hashed_name, HSTR_FULL_HASH);
+ hash = hashed_name.hash;
+
+ hashtable_hash_foreach(node->base._op_bucket, hash, pos, n, ht)
+ {
+ if (HSTR_EQ(&pos->key, &hashed_name)) {
+ return &pos->val;
+ }
+ }
+
+ return NULL;
+}
\ No newline at end of file
{
if ((tdev->lflags & _ISIG)) {
signal_send(-tdev->fggrp, signal);
+ pwake_all(&tdev->line_in_event);
}
}
\ No newline at end of file
--- /dev/null
+#ifndef __LUNAIX_DEVTREE_H
+#define __LUNAIX_DEVTREE_H
+
+#include <lunaix/types.h>
+#include <lunaix/ds/llist.h>
+#include <lunaix/ds/hstr.h>
+#include <lunaix/ds/hashtable.h>
+#include <lunaix/boot_generic.h>
+
+#define le(v) ((((v) >> 24) & 0x000000ff) |\
+ (((v) << 8) & 0x00ff0000) |\
+ (((v) >> 8) & 0x0000ff00) |\
+ (((v) << 24) & 0xff000000))
+
+#define le64(v) (((u64_t)le(v & 0xffffffff) << 32) | le(v >> 32))
+
+#define be(v) ((((v) >> 24) & 0x000000ff) |\
+               (((v) << 8) & 0x00ff0000) |\
+               (((v) >> 8) & 0x0000ff00) |\
+               (((v) << 24) & 0xff000000))
+
+#define FDT_MAGIC be(0xd00dfeed)
+#define FDT_NOD_BEGIN be(0x00000001)
+#define FDT_NOD_END be(0x00000002)
+#define FDT_PROP be(0x00000003)
+#define FDT_NOP be(0x00000004)
+#define FDT_END be(0x00000009)
+
+#define STATUS_OK 0
+#define STATUS_DISABLE 1
+#define STATUS_RSVD 2
+#define STATUS_FAIL 3
+
+
+typedef unsigned int* dt_enc_t;
+typedef unsigned int dt_phnd_t;
+
+#define PHND_NULL ((dt_phnd_t)-1)
+
+struct fdt_header {
+ u32_t magic;
+ u32_t totalsize;
+ u32_t off_dt_struct;
+ u32_t off_dt_strings;
+ u32_t off_mem_rsvmap;
+ u32_t version;
+ u32_t last_comp_version;
+ u32_t boot_cpuid_phys;
+ u32_t size_dt_strings;
+ u32_t size_dt_struct;
+};
+
+struct fdt_memrsvd_ent
+{
+ u64_t addr;
+ u64_t size;
+} align(8);
+
+struct fdt_token
+{
+ u32_t token;
+} compact align(4);
+
+struct fdt_node_head
+{
+ struct fdt_token token;
+ char name[0];
+};
+
+struct fdt_prop
+{
+ struct fdt_token token;
+ u32_t len;
+ u32_t nameoff;
+} compact align(4);
+
+struct dt_prop_val
+{
+ struct {
+ union
+ {
+ union {
+ const char* str_val;
+ const char** str_lst;
+ };
+ ptr_t ptr_val;
+
+ union {
+ dt_enc_t encoded;
+ dt_phnd_t phandle;
+ };
+ u32_t u32_val;
+
+ u64_t u64_val;
+ };
+ unsigned int size;
+ };
+};
+
+
+struct dt_prop
+{
+ struct hlist_node ht;
+ struct hstr key;
+ struct dt_prop_val val;
+};
+
+struct dt_node_base
+{
+ union {
+ struct {
+ unsigned char addr_c;
+ unsigned char sz_c;
+ unsigned char intr_c;
+ unsigned char status;
+ };
+ unsigned int _std;
+ };
+
+ union {
+ struct {
+ bool dma_coherent : 1;
+ bool dma_ncoherent : 1;
+ bool intr_controll : 1;
+ unsigned int other : 29;
+ };
+ unsigned int flags;
+ };
+
+ struct dt_node_base *parent;
+ struct llist_header children;
+ struct llist_header siblings;
+ struct llist_header nodes;
+ struct hlist_node phnd_link;
+
+ const char* name;
+
+ struct dt_prop_val compat;
+ const char* model;
+ dt_phnd_t phandle;
+
+ union {
+ struct hbucket other_props[0];
+ struct hbucket _op_bucket[8];
+ };
+};
+
+struct dt_root
+{
+ struct dt_node_base base;
+
+ const char* serial;
+ const char* chassis;
+};
+
+struct dt_intr_prop;
+
+struct dt_intr_node
+{
+ union {
+ struct dt_intr_node *parent;
+ dt_phnd_t parent_hnd;
+ };
+
+ struct {
+ bool extended;
+ union {
+ struct dt_prop_val arr;
+ struct llist_header values;
+ };
+ } intr;
+
+ struct dt_prop_val intr_map;
+ struct dt_prop_val intr_map_mask;
+};
+#define DT_NODE(intr_node) \
+ (container_of(intr_node, struct dt_node, intr))
+
+
+struct dt_node
+{
+ struct dt_node_base base;
+ struct dt_intr_node intr;
+
+ struct dt_prop_val reg;
+ struct dt_prop_val vreg;
+
+ struct dt_prop_val ranges;
+ struct dt_prop_val dma_ranges;
+};
+
+
+struct dt_intr_prop
+{
+ struct dt_intr_node *master;
+
+ struct llist_header props;
+ struct dt_prop_val val;
+};
+
+struct dt_prop_iter
+{
+ struct dt_prop_val *prop;
+ struct dt_node_base *node;
+ dt_enc_t prop_loc;
+ dt_enc_t prop_loc_next;
+ unsigned int ent_sz;
+};
+
+struct dt_context
+{
+ union {
+ ptr_t reloacted_dtb;
+ struct fdt_header* fdt;
+ };
+
+ struct llist_header nodes;
+ struct dt_root *root;
+ struct hbucket phnds_table[16];
+ const char *str_block;
+};
+
+struct fdt_iter
+{
+ union {
+ struct fdt_token *pos;
+ struct fdt_prop *prop;
+ struct fdt_node_head *node_head;
+ };
+
+ const char* str_block;
+ int depth;
+};
+
+struct fdt_memrsvd_iter
+{
+ struct fdt_memrsvd_ent *block;
+};
+
+struct dt_node_iter
+{
+ struct dt_node_base* head;
+ struct dt_node_base* matched;
+ const char *name;
+};
+
+#define dtnode_child_foreach(node_base, pos, n) \
+ llist_for_each(pos, n, &(node_base)->children, siblings)
+
+#define fdt_prop(tok) ((tok)->token == FDT_PROP)
+#define fdt_node(tok) ((tok)->token == FDT_NOD_BEGIN)
+#define fdt_node_end(tok) ((tok)->token == FDT_NOD_END)
+#define fdt_nope(tok) ((tok)->token == FDT_NOP)
+
+void
+fdt_itbegin(struct fdt_iter* fdti, struct fdt_header* fdt_hdr);
+
+void
+fdt_itend(struct fdt_iter* fdti);
+
+bool
+fdt_itnext(struct fdt_iter* fdti);
+
+bool
+fdt_itnext_at(struct fdt_iter* fdti, int level);
+
+void
+fdt_memrsvd_itbegin(struct fdt_memrsvd_iter* rsvdi,
+ struct fdt_header* fdt_hdr);
+
+bool
+fdt_memrsvd_itnext(struct fdt_memrsvd_iter* rsvdi);
+
+void
+fdt_memrsvd_itend(struct fdt_memrsvd_iter* rsvdi);
+
+
+bool
+dt_load(ptr_t dtb_dropoff);
+
+struct dt_node*
+dt_resolve_phandle(dt_phnd_t phandle);
+
+struct dt_prop_val*
+dt_getprop(struct dt_node* node, const char* name);
+
+void
+dt_begin_find(struct dt_node_iter* iter,
+ struct dt_node* node, const char* name);
+
+bool
+dt_find_next(struct dt_node_iter* iter,
+ struct dt_node_base** matched);
+
+static inline bool
+dt_found_any(struct dt_node_iter* iter)
+{
+ return !!iter->matched;
+}
+
+
+static inline char*
+fdtit_prop_key(struct fdt_iter* fdti)
+{
+    return (char*)&fdti->str_block[le(fdti->prop->nameoff)];
+}
+
+static inline void
+dt_decode(struct dt_prop_iter* dtpi, struct dt_node_base* node,
+ struct dt_prop_val* val, unsigned int ent_sz)
+{
+ *dtpi = (struct dt_prop_iter) {
+ .prop = val,
+ .node = node,
+ .prop_loc = val->encoded,
+ .prop_loc_next = val->encoded,
+ .ent_sz = ent_sz
+ };
+}
+
+#define dt_decode_reg(dtpi, node, field) \
+    dt_decode(dtpi, &(node)->base, &(node)->field, \
+              (node)->base.sz_c + (node)->base.addr_c)
+
+#define dt_decode_range(dtpi, node, field) \
+    dt_decode(dtpi, &(node)->base, &(node)->field, \
+              (node)->base.addr_c * 2 + (node)->base.sz_c)
+
+static inline void
+dt_decode_intrmap(struct dt_prop_iter* dtpi,
+ struct dt_intr_node* intr_node)
+{
+ unsigned int size;
+ struct dt_node* node;
+ struct dt_node_base* base;
+
+ node = DT_NODE(intr_node);
+ base = &node->base;
+ size = (base->addr_c + base->intr_c) * 2 + 1;
+
+ dt_decode(dtpi, base, &intr_node->intr_map, size);
+}
+
+#define dtprop_off(dtpi) \
+ (unsigned int)(\
+ __ptr(dtpi->prop_loc_next) - __ptr(dtpi->prop->encoded) \
+ )
+
+#define dtprop_extract(dtpi, off) \
+ ( (dt_enc_t) (&(dtpi)->prop_loc[(off)]) )
+
+static inline bool
+dtprop_next_n(struct dt_prop_iter* dtpi, int n)
+{
+ unsigned int off;
+
+ dtpi->prop_loc = dtpi->prop_loc_next;
+ dtpi->prop_loc_next += n;
+
+ off = dtprop_off(dtpi);
+    return off <= dtpi->prop->size;
+}
+
+static inline bool
+dtprop_prev_n(struct dt_prop_iter* dtpi, int n)
+{
+ unsigned int off;
+
+ off = dtprop_off(dtpi);
+ if (!off || off > dtpi->prop->size) {
+ return false;
+ }
+
+ dtpi->prop_loc = dtpi->prop_loc_next;
+ dtpi->prop_loc_next -= n;
+
+ return true;
+}
+
+static inline bool
+dtprop_next(struct dt_prop_iter* dtpi)
+{
+ return dtprop_next_n(dtpi, dtpi->ent_sz);
+}
+
+static inline bool
+dtprop_prev(struct dt_prop_iter* dtpi)
+{
+ return dtprop_prev_n(dtpi, dtpi->ent_sz);
+}
+
+static inline unsigned int
+dtprop_to_u32(dt_enc_t enc_val)
+{
+ return le(*enc_val);
+}
+
+#define dtprop_to_phnd(enc_val) \
+ (dt_phnd_t)dtprop_to_u32(enc_val)
+
+static inline u64_t
+dtprop_to_u64(dt_enc_t enc_val)
+{
+ return le64(*(u64_t*)enc_val);
+}
+
+static inline dt_enc_t
+dtprop_reg_addr(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, 0);
+}
+
+static inline dt_enc_t
+dtprop_reg_len(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, dtpi->node->addr_c);
+}
+
+static inline dt_enc_t
+dtprop_range_childbus(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, 0);
+}
+
+static inline dt_enc_t
+dtprop_range_parentbus(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, dtpi->node->addr_c);
+}
+
+static inline dt_enc_t
+dtprop_range_len(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, dtpi->node->addr_c * 2);
+}
+
+static inline dt_enc_t
+dtprop_intr_cuaddr(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, 0);
+}
+
+static inline dt_enc_t
+dtprop_intr_cispec(struct dt_prop_iter* dtpi)
+{
+ return dtprop_extract(dtpi, dtpi->node->addr_c);
+}
+
+static inline struct dt_intr_node*
+dtprop_intr_parent(struct dt_prop_iter* dtpi)
+{
+ unsigned off;
+ struct dt_node* node;
+ dt_enc_t enc_val;
+
+ off = dtpi->node->addr_c + dtpi->node->intr_c;
+ enc_val = dtprop_extract(dtpi, off);
+ node = dt_resolve_phandle(dtprop_to_phnd(enc_val));
+
+ return &node->intr;
+}
+
+static inline dt_enc_t
+dtprop_intr_puaddr(struct dt_prop_iter* dtpi)
+{
+ unsigned off;
+
+ off = dtpi->node->addr_c + dtpi->node->intr_c + 1;
+ return dtprop_extract(dtpi, off);
+}
+
+static inline dt_enc_t
+dtprop_intr_pispec(struct dt_prop_iter* dtpi)
+{
+ unsigned off;
+
+ off = dtpi->node->addr_c * 2 + dtpi->node->intr_c + 1;
+ return dtprop_extract(dtpi, off);
+}
+
+#endif /* __LUNAIX_DEVTREE_H */
struct
{
- ptr_t ksections;
- size_t size;
+ struct {
+ char* cmdline;
+ size_t len;
+ };
- char* cmdline;
- size_t len;
+ ptr_t dtb_pa;
} kexec;
struct
struct hstr
{
- u32_t hash;
- u32_t len;
+ unsigned int hash;
+ unsigned int len;
const char* value;
};
mntops_umnt unmount;
};
+struct fs_iter
+{
+ struct filesystem* fs;
+};
+
struct v_superblock
{
struct llist_header sb_list;
struct filesystem*
fsm_get(const char* fs_name);
+void
+fsm_itbegin(struct fs_iter* iterator);
+
+bool
+fsm_itnext(struct fs_iter* iterator);
+
+static inline void
+fsm_itend(struct fs_iter* iterator)
+{
+ iterator->fs = NULL;
+}
+
void
vfs_init();
+++ /dev/null
-#ifndef __LUNAIX_PROBE_BOOT_H
-#define __LUNAIX_PROBE_BOOT_H
-
-#include <lunaix/device.h>
-
-struct device*
-probe_boot_medium();
-
-#endif /* __LUNAIX_PROBE_BOOT_H */
--- /dev/null
+#ifndef __LUNAIX_BOOTMEM_H
+#define __LUNAIX_BOOTMEM_H
+
+#include <lunaix/types.h>
+
+/*
+ * bootmem:
+ *
+ * Architecture-defined memory manager during boot stage.
+ *
+ * It provides basic memory services before the kernel's
+ * mm context is available. As its name suggests, this is
+ * particularly useful for allocating temporary memory
+ * to get essential things done during the boot stage.
+ *
+ * Implementation details are not enforced by Lunaix, but
+ * it is recommended that such a memory pool be reclaimed
+ * as early as possible (no later than the spawning of
+ * the first process).
+ *
+ */
+
+void*
+bootmem_alloc(unsigned int size);
+
+void
+bootmem_free(void* ptr);
+
+#endif /* __LUNAIX_BOOTMEM_H */
#include <sys/cpu.h>
#include <lunaix/process.h>
-#define _preemptible \
- __attribute__((section(".kf.preempt"))) no_inline
-
-#define ensure_preempt_caller() \
- do { \
- extern int __kf_preempt_start[]; \
- extern int __kf_preempt_end[]; \
- ptr_t _retaddr = abi_get_retaddr(); \
- assert_msg((ptr_t)__kf_preempt_start <= _retaddr \
- && _retaddr < (ptr_t)__kf_preempt_end, \
- "caller must be kernel preemptible"); \
- } while(0)
-
static inline void
set_preemption()
{
}
static inline bool
-l0tep_impile_vmnts(pte_t* ptep)
+l0tep_implie_vmnts(pte_t* ptep)
{
return l0tep_implie(ptep, VMS_SELF) ||
l0tep_implie(ptep, VMS_MOUNT_1);
struct mm_region*
region_dup(struct mm_region* origin);
-static inline pte_attr_t
-region_pteprot(struct mm_region* vmr)
+static inline pte_t
+region_tweakpte(struct mm_region* vmr, pte_t pte)
{
- return translate_vmr_prot(vmr->attr);
+ return translate_vmr_prot(vmr->attr, pte);
}
#endif /* __LUNAIX_REGION_H */
#define PS_GrDT (PS_TERMNAT | PS_DESTROY)
#define PS_Rn (PS_RUNNING | PS_CREATED)
-#define proc_terminated(proc) (((proc)->state) & PS_GrDT)
-#define proc_hanged(proc) (((proc)->state) & PS_BLOCKED)
-#define proc_runnable(proc) (!(proc)->state || !(((proc)->state) & ~PS_Rn))
+#define proc_terminated(proc) \
+ (!(proc) || ((proc)->state) & PS_GrDT)
+#define proc_hanged(proc) \
+ ((proc) && ((proc)->state) & PS_BLOCKED)
+#define proc_runnable(proc) \
+ ((proc) && (!(proc)->state || !(((proc)->state) & ~PS_Rn)))
#define TH_DETACHED 0b00000001
assert(th);
start_thread(th, entry);
+ detach_thread(th);
}
void
--- /dev/null
+#ifndef __LUNAIX_SECTIONS_H
+#define __LUNAIX_SECTIONS_H
+
+#include <lunaix/types.h>
+
+#define __mark_name(n, s) __##n##_##s
+#define __section_mark(name, suffix) \
+ ({ extern unsigned long __mark_name(name,suffix)[]; \
+ __ptr(__mark_name(name,suffix)); })
+
+
+/* Auto-generated data */
+
+#define extern_autogen(name) \
+ weak unsigned long __mark_name(autogen,name)[] = {}; \
+ extern unsigned long __mark_name(autogen,name)[];
+
+#define autogen_name(name) __mark_name(autogen,name)
+
+#define autogen(type, name) \
+ ((type*)__mark_name(autogen,name))
+
+
+/* Common section definitions */
+
+#define reclaimable __section(".bss.reclaim")
+#define reclaimable_start __section_mark(bssreclaim, start)
+#define reclaimable_end __section_mark(bssreclaim, end)
+
+#define kernel_start __section_mark(kexec, start)
+#define kernel_load_end __section_mark(kexec, end)
+#define kernel_end __section_mark(kimg, end)
+
+#ifdef CONFIG_USE_DEVICETREE
+#define dtb_start __section_mark(dtb, start)
+#endif
+
+
+/* kernel section mapping info */
+
+struct ksection
+{
+ ptr_t va;
+ ptr_t pa;
+ unsigned int size;
+ unsigned int flags;
+} align(4);
+
+struct ksecmap
+{
+ unsigned int num;
+ unsigned int ksize;
+ struct ksection secs[0];
+} align(4);
+
+#endif /* __LUNAIX_SECTIONS_H */
})
#define offset(data, off) \
- ((void*)(__ptr(data) + (off)))
+ ((typeof(data))(__ptr(data) + (off)))
+
+#define offset_t(data, type, off) \
+ ((type*)(__ptr(data) + (off)))
#define __ptr(val) ((ptr_t)(val))
config_h += -include $(lbuild_config_h)
tmp_kbin := $(BUILD_DIR)/tmpk.bin
-ksymtable := lunaix_ksyms.o
klinking := link/lunaix.ld
CFLAGS += $(khdr_opts) $(kinc_opts) $(config_h) -MMD -MP
@$(CC) -T $(klinking) $(config_h) $(LDFLAGS) -o $@ \
$(call all_linkable,$^)
+ksymtable := lunaix_ksyms.o
+ksecsmap := lunaix_ksecsmap.o
+
+kautogen := $(ksecsmap) $(ksymtable)
$(ksymtable): $(tmp_kbin)
$(call status_,KSYM,$@)
- @ARCH=$(ARCH) scripts/gen_ksymtable.sh DdRrTtAGg $< > lunaix_ksymtable.S
+ @ARCH=$(ARCH) scripts/gen-ksymtable DdRrTtAGg $< > lunaix_ksymtable.S
@$(CC) $(CFLAGS) -c lunaix_ksymtable.S -o $@
+$(ksecsmap): $(tmp_kbin)
+ $(call status_,KGEN,$@)
+ @scripts/elftool.tool -p -i $< > lunaix_ksecsmap.S
+
+ @$(CC) $(CFLAGS) -c lunaix_ksecsmap.S -o $@
.PHONY: __do_relink
-__do_relink: $(klinking) $(ksrc_objs) $(ksymtable)
+
+__do_relink: $(klinking) $(ksrc_objs) $(kautogen)
$(call status_,LD,$(kbin))
@$(CC) -T $(klinking) $(config_h) $(LDFLAGS) -o $(kbin) \
#include <lunaix/mm/vmm.h>
#include <lunaix/spike.h>
#include <lunaix/kcmd.h>
+#include <lunaix/sections.h>
#include <sys/mm/mm_defs.h>
-extern unsigned char __kexec_end[], __kexec_start[];
-
/**
* @brief Reserve memory for kernel bootstrapping initialization
*
boot_begin_arch_reserve(bhctx);
// 将内核占据的页,包括前1MB,hhk_init 设为已占用
- size_t pg_count = leaf_count(to_kphysical(__kexec_end));
+ size_t pg_count = leaf_count(to_kphysical(kernel_load_end));
pmm_onhold_range(0, pg_count);
size_t i;
}
}
-extern u8_t __kboot_end; /* link/linker.ld */
+static void
+__free_reclaimable()
+{
+ ptr_t start;
+ pfn_t pgs;
+ pte_t* ptep;
+
+ start = reclaimable_start;
+ pgs = leaf_count(reclaimable_end - start);
+ ptep = mkptep_va(VMS_SELF, start);
+
+ pmm_unhold_range(pfn(to_kphysical(start)), pgs);
+ vmm_unset_ptes(ptep, pgs);
+}
/**
* @brief Release memory for kernel bootstrapping initialization
bhctx->release(bhctx);
boot_clean_arch_reserve(bhctx);
+
+ __free_reclaimable();
}
void
#include <lunaix/spike.h>
#include <lunaix/syslog.h>
#include <lunaix/trace.h>
+#include <lunaix/sections.h>
#include <sys/abi.h>
#include <sys/mm/mm_defs.h>
LOG_MODULE("TRACE")
-weak struct ksyms __lunaix_ksymtable[] = { };
-extern struct ksyms __lunaix_ksymtable[];
+extern_autogen(ksymtable);
static struct trace_context trace_ctx;
void
trace_modksyms_init(struct boot_handoff* bhctx)
{
- trace_ctx.ksym_table = __lunaix_ksymtable;
+ trace_ctx.ksym_table = autogen(struct ksyms, ksymtable);
}
struct ksym_entry*
if (mutex->owner != pid || !atomic_load(&mutex->lk)) {
return;
}
- __mutext_unlock(mutex);
+ atomic_fetch_sub(&mutex->lk, 1);
}
void
"path_walk.c",
"fsm.c",
"fs_export.c",
- "probe_boot.c"
])
return fs;
}
+void
+fsm_itbegin(struct fs_iter* iterator)
+{
+ iterator->fs = list_entry(&fs_flatlist, struct filesystem, fs_flat);
+}
+
+bool
+fsm_itnext(struct fs_iter* iterator)
+{
+ iterator->fs = list_next(iterator->fs, struct filesystem, fs_flat);
+ return &iterator->fs->fs_flat != &fs_flatlist;
+}
+
static void
read_fslist(struct twimap *mapping)
{
}
llist_init_head(&mnt->submnts);
+ llist_init_head(&mnt->sibmnts);
llist_append(&all_mnts, &mnt->list);
mutex_init(&mnt->lock);
// detached the inodes from cache, and let lru policy to recycle them
for (size_t i = 0; i < VFS_HASHTABLE_SIZE; i++) {
struct hbucket* bucket = &sb->i_cache[i];
- if (!bucket) {
+ if (!bucket->head) {
continue;
}
bucket->head->pprev = 0;
return errno;
}
-int
-vfs_mount_at(const char* fs_name,
- struct device* device,
- struct v_dnode* mnt_point,
- int options)
+static int
+vfs_mount_fsat(struct filesystem* fs,
+ struct device* device,
+ struct v_dnode* mnt_point,
+ int options)
{
+
if (device && device->dev_type != DEV_IFVOL) {
return ENOTBLK;
}
return ENOTDIR;
}
- struct filesystem* fs = fsm_get(fs_name);
- if (!fs) {
- return ENODEV;
- }
-
if ((fs->types & FSTYPE_ROFS)) {
options |= MNT_RO;
}
int errno = 0;
char* dev_name = "sys";
+ char* fsname = HSTR_VAL(fs->fs_name);
+
struct v_mount* parent_mnt = mnt_point->mnt;
struct v_superblock *sb = vfs_sb_alloc(),
*old_sb = mnt_point->super_block;
mnt_point->mnt->flags = options;
if (!(errno = fs->mount(sb, mnt_point))) {
- kprintf("mount: dev=%s, fs=%s, mode=%d", dev_name, fs_name, options);
+ kprintf("mount: dev=%s, fs=%s, mode=%d",
+ dev_name, fsname, options);
} else {
goto cleanup;
}
cleanup:
ERROR("failed mount: dev=%s, fs=%s, mode=%d, err=%d",
- dev_name,
- fs_name,
- options,
- errno);
+ dev_name, fsname, options, errno);
+
vfs_d_assign_sb(mnt_point, old_sb);
vfs_sb_free(sb);
__vfs_release_vmnt(mnt_point->mnt);
+ mnt_point->mnt = parent_mnt;
+
+ return errno;
+}
+
+int
+vfs_mount_at(const char* fs_name,
+ struct device* device,
+ struct v_dnode* mnt_point,
+ int options)
+{
+ if (fs_name) {
+ struct filesystem* fs = fsm_get(fs_name);
+ if (!fs) {
+ return ENODEV;
+ }
+
+ return vfs_mount_fsat(fs, device, mnt_point, options);
+ }
+
+ int errno = ENODEV;
+ struct fs_iter fsi;
+
+ fsm_itbegin(&fsi);
+ while (fsm_itnext(&fsi))
+ {
+ if ((fsi.fs->types & FSTYPE_PSEUDO)) {
+ continue;
+ }
+
+ INFO("mount attempt: %s", HSTR_VAL(fsi.fs->fs_name));
+ errno = vfs_mount_fsat(fsi.fs, device, mnt_point, options);
+ if (!errno) {
+ break;
+ }
+ }
+
return errno;
}
+++ /dev/null
-#include <lunaix/fs/probe_boot.h>
-#include <lunaix/mm/valloc.h>
-#include <lunaix/syslog.h>
-
-#include "iso9660/iso9660.h"
-
-LOG_MODULE("PROBE")
-
-#define LUNAIX_ID 0x414e554cUL // "LUNA"
-
-struct device*
-probe_boot_medium()
-{
- struct device_meta* block_cat = device_getbyname(NULL, "block", 5);
- if (!block_cat) {
- return NULL;
- }
-
- struct iso_vol_primary* volp = valloc(ISO9660_BLKSZ);
-
- struct device* dev = NULL;
- struct device_meta *pos, *n;
- llist_for_each(pos, n, &block_cat->children, siblings)
- {
- dev = resolve_device(pos);
- if (!dev) {
- continue;
- }
-
- int errno =
- dev->ops.read(dev, (void*)volp, ISO9660_READ_OFF, ISO9660_BLKSZ);
- if (errno < 0) {
- kprintf(KINFO "failed %xh:%xh, /dev/%s",
- dev->ident.fn_grp,
- dev->ident.unique,
- dev->name.value);
- dev = NULL;
- goto done;
- }
-
- if (*(u32_t*)volp->header.std_id != ISO_SIGNATURE_LO) {
- continue;
- }
-
- if (*(u32_t*)volp->sys_id == LUNAIX_ID) {
- kprintf(KINFO "%xh:%xh, /dev/%s, %s",
- dev->ident.fn_grp,
- dev->ident.unique,
- dev->name.value,
- (char*)volp->vol_id);
- goto done;
- }
- }
-
- return NULL;
-
-done:
- vfree(volp);
- return dev;
-}
\ No newline at end of file
{
struct v_inode* inode;
int errno = 0;
-
- if (file->ref_count > 1) {
- atomic_fetch_sub(&file->ref_count, 1);
- return 0;
- }
-
+
inode = file->inode;
/*
* process is writing to this file later after B exit.
*/
- if (mutex_on_hold(&inode->lock)) {
- mutex_unlock_for(&inode->lock, pid);
+ mutex_unlock_for(&inode->lock, pid);
+
+ if (file->ref_count > 1) {
+ atomic_fetch_sub(&file->ref_count, 1);
+ return 0;
}
- lock_inode(inode);
-
- pcache_commit_all(inode);
if ((errno = file->ops->close(file))) {
- goto unlock;
+ goto done;
}
atomic_fetch_sub(&file->dnode->ref_count, 1);
+ mnt_chillax(file->dnode->mnt);
+ cake_release(file_pile, file);
+
+ /*
+    if the current inode is not locked by other
+    threads that do not share the same open context,
+    then we can try to sync opportunistically
+ */
+ if (mutex_on_hold(&inode->lock)) {
+ goto done;
+ }
+
+ lock_inode(inode);
+
+ pcache_commit_all(inode);
inode->open_count--;
if (!inode->open_count) {
__sync_inode_nolock(inode);
}
- mnt_chillax(file->dnode->mnt);
- cake_release(file_pile, file);
-
-unlock:
unlock_inode(inode);
+
+done:
return errno;
}
#include <lunaix/boot_generic.h>
#include <lunaix/device.h>
#include <lunaix/foptions.h>
-#include <lunaix/fs/twifs.h>
#include <lunaix/input.h>
#include <lunaix/mm/cake.h>
-#include <lunaix/mm/mmio.h>
#include <lunaix/mm/pmm.h>
+#include <lunaix/mm/page.h>
#include <lunaix/mm/valloc.h>
#include <lunaix/mm/vmm.h>
#include <lunaix/process.h>
#include <lunaix/sched.h>
#include <lunaix/spike.h>
#include <lunaix/trace.h>
-#include <lunaix/tty/tty.h>
#include <lunaix/owloysius.h>
#include <lunaix/hart_state.h>
+#include <lunaix/syslog.h>
+#include <lunaix/sections.h>
#include <hal/acpi/acpi.h>
+#include <hal/devtree.h>
#include <sys/abi.h>
#include <sys/mm/mm_defs.h>
#include <klibc/strfmt.h>
#include <klibc/string.h>
-void
-spawn_lunad();
+LOG_MODULE("kinit")
-void
-kmem_init(struct boot_handoff* bhctx);
+extern void
+lunad_main();
+
+/**
+ * @brief 创建并运行Lunaix守护进程
+ *
+ */
+static void
+spawn_lunad()
+{
+ int has_error;
+ struct thread* kthread;
+
+ has_error = spawn_process(&kthread, (ptr_t)lunad_main, false);
+ assert_msg(!has_error, "failed to spawn lunad");
+
+ run(kthread);
+
+ fail("Unexpected Return");
+}
+
+static void
+kmem_init(struct boot_handoff* bhctx)
+{
+ pte_t* ptep = mkptep_va(VMS_SELF, KERNEL_RESIDENT);
+
+ ptep = mkl0tep(ptep);
+
+ unsigned int i = ptep_vfn(ptep);
+ do {
+ if (l0tep_implie_vmnts(ptep)) {
+ ptep++;
+ continue;
+ }
+
+#if LnT_ENABLED(1)
+ assert(mkl1t(ptep++, 0, KERNEL_PGTAB));
+#elif LnT_ENABLED(2)
+ assert(mkl2t(ptep++, 0, KERNEL_PGTAB));
+#elif LnT_ENABLED(3)
+ assert(mkl3t(ptep++, 0, KERNEL_PGTAB));
+#else
+ assert(mklft(ptep++, 0, KERNEL_PGTAB));
+#endif
+ } while (++i < MAX_PTEN);
+
+ // allocators
+ cake_init();
+ valloc_init();
+}
+
+static void
+__remap_and_load_dtb(struct boot_handoff* bhctx)
+{
+#ifdef CONFIG_USE_DEVICETREE
+ ptr_t dtb = bhctx->kexec.dtb_pa;
+
+ if (!dtb) {
+ return;
+ }
+
+ if (va_offset(dtb)) {
+ WARN("bad-aligned dtb location, expect page aligned");
+ return;
+ }
+
+ pte_t *ptep, pte;
+ size_t nr_pages;
+ bool loaded;
+
+ pte = mkpte(dtb, KERNEL_DATA);
+ ptep = mkptep_va(VMS_SELF, dtb_start);
+ nr_pages = leaf_count(CONFIG_DTB_MAXSIZE);
+
+ pmm_onhold_range(dtb, nr_pages);
+ vmm_set_ptes_contig(ptep, pte, PAGE_SIZE, nr_pages);
+
+ loaded = dt_load(dtb_start);
+ if (!loaded) {
+ ERROR("dtb load failed");
+ }
+#endif
+
+ return;
+}
void
kernel_bootstrap(struct boot_handoff* bhctx)
/* Setup kernel memory layout and services */
kmem_init(bhctx);
+ __remap_and_load_dtb(bhctx);
+
boot_parse_cmdline(bhctx);
/* Prepare stack trace environment */
invoke_init_function(on_boot);
- must_success(vfs_unmount("/dev"));
-
/* Finish up bootstrapping sequence, we are ready to spawn the root process
* and start geting into uspace
*/
spawn_lunad();
}
-extern void
-lunad_main();
-
-/**
- * @brief 创建并运行Lunaix守护进程
- *
- */
-void
-spawn_lunad()
-{
- int has_error;
- struct thread* kthread;
-
- has_error = spawn_process(&kthread, (ptr_t)lunad_main, false);
- assert_msg(!has_error, "failed to spawn lunad");
-
- run(kthread);
-
- fail("Unexpected Return");
-}
-
-void
-kmem_init(struct boot_handoff* bhctx)
-{
- pte_t* ptep = mkptep_va(VMS_SELF, KERNEL_RESIDENT);
-
- ptep = mkl0tep(ptep);
-
- unsigned int i = ptep_vfn(ptep);
- do {
- if (l0tep_impile_vmnts(ptep)) {
- ptep++;
- continue;
- }
-
-#if LnT_ENABLED(1)
- assert(mkl1t(ptep++, 0, KERNEL_DATA));
-#elif LnT_ENABLED(2)
- assert(mkl2t(ptep++, 0, KERNEL_DATA));
-#elif LnT_ENABLED(3)
- assert(mkl3t(ptep++, 0, KERNEL_DATA));
-#else
- assert(mklft(ptep++, 0, KERNEL_DATA));
-#endif
- } while (++i < MAX_PTEN);
-
- // allocators
- cake_init();
- valloc_init();
-}
#include <lunaix/exec.h>
#include <lunaix/foptions.h>
#include <lunaix/fs.h>
-#include <lunaix/fs/probe_boot.h>
#include <lunaix/fs/twifs.h>
#include <lunaix/spike.h>
#include <lunaix/syslog.h>
#include <lunaix/owloysius.h>
#include <lunaix/sched.h>
#include <lunaix/kpreempt.h>
+#include <lunaix/kcmd.h>
#include <klibc/string.h>
int
mount_bootmedium()
{
- struct v_dnode* dnode;
int errno = 0;
- struct device* dev = probe_boot_medium();
+ char* rootfs;
+ struct v_dnode* dn;
+ struct device* dev;
+
+ if (!kcmd_get_option("rootfs", &rootfs)) {
+ ERROR("no rootfs.");
+ return 0;
+ }
+
+ if ((errno = vfs_walk(NULL, rootfs, &dn, NULL, 0))) {
+ ERROR("%s: no such file (%d)", rootfs, errno);
+ return 0;
+ }
+
+ dev = resolve_device(dn->inode->data);
if (!dev) {
- ERROR("fail to acquire device. (%d)", errno);
+ ERROR("%s: not a device", rootfs);
return 0;
}
- if ((errno = vfs_mount("/mnt/lunaix-os", "iso9660", dev, 0))) {
- ERROR("fail to mount boot medium. (%d)", errno);
+    // unmount /dev so the old root fs is clear for replacement
+ must_success(vfs_unmount("/dev"));
+
+ // re-mount the root fs with our device.
+ if ((errno = vfs_mount_root(NULL, dev))) {
+ ERROR("mount root failed: %s (%d)", rootfs, errno);
return 0;
}
exec_initd()
{
int errno = 0;
- const char* argv[] = { "/mnt/lunaix-os/usr/bin/init", 0 };
+ const char* argv[] = { "/init", 0 };
const char* envp[] = { 0 };
+ kcmd_get_option("init", (char**)&argv[0]);
+
if ((errno = exec_kexecve(argv[0], argv, envp))) {
goto fail;
}
// No, these are not preemptive
no_preemption();
- if (!mount_bootmedium() || !exec_initd()) {
+ if (!exec_initd()) {
fail("failed to initd");
}
}
* 同时,该进程也负责fork出我们的init进程。
*
*/
-void _preemptible
+void
lunad_main()
{
spawn_kthread((ptr_t)init_platform);
void
init_platform()
-{
+{
device_postboot_load();
invoke_init_function(on_postboot);
twifs_register_plugins();
+ if (!mount_bootmedium()) {
+ ERROR("failed to boot");
+ goto exit;
+ }
+
// FIXME Re-design needed!!
// sdbg_init();
assert(!spawn_process(NULL, (ptr_t)lunad_do_usr, true));
+exit:
exit_thread(NULL);
}
\ No newline at end of file
// for a ptep fault, the parent page tables should match the actual
// accesser permission
if (kernel_refaddr) {
- ptep_alloc_hierarchy(fault_ptep, fault_va, KERNEL_DATA);
+ ptep_alloc_hierarchy(fault_ptep, fault_va, KERNEL_PGTAB);
} else {
- ptep_alloc_hierarchy(fault_ptep, fault_va, USER_DATA);
+ ptep_alloc_hierarchy(fault_ptep, fault_va, USER_PGTAB);
}
fault->fault_pte = fault_pte;
+
+ if (fault->ptep_fault) {
+ // fault on intermediate levels.
+ fault_pte = pte_setprot(fault_pte, KERNEL_PGTAB);
+ }
- if (fault->ptep_fault && !kernel_refaddr) {
- fault->resolving = pte_setprot(fault_pte, USER_DATA);
- } else {
- fault->resolving = pte_setprot(fault_pte, KERNEL_DATA);
+ if (!kernel_refaddr) {
+ fault_pte = pte_mkuser(fault_pte);
}
- fault->resolving = pte_mkloaded(fault->resolving);
+ fault->resolving = pte_mkloaded(fault_pte);
fault->kernel_vmfault = kernel_vmfault;
fault->kernel_access = kernel_context(fault->hstate);
__handle_anon_region(struct fault_context* fault)
{
pte_t pte = fault->resolving;
- pte_attr_t prot = region_pteprot(fault->vmr);
- pte = pte_setprot(pte, prot);
+ pte = region_tweakpte(fault->vmr, pte);
// TODO Potentially we can get different order of leaflet here
struct leaflet* region_part = alloc_leaflet(0);
// TODO Potentially we can get different order of leaflet here
struct leaflet* region_part = alloc_leaflet(0);
- pte = pte_setprot(pte, region_pteprot(vmr));
+ pte = region_tweakpte(vmr, pte);
ptep_map_leaflet(fault->fault_ptep, pte, region_part);
if (mseg_off < mapped_len) {
}
bool
-pmm_allocator_trymark_onhold(struct pmem_pool* pool, struct ppage* start, struct ppage* end)
+pmm_allocator_trymark_onhold(struct pmem_pool* pool,
+ struct ppage* start, struct ppage* end)
{
while (start <= end) {
if (__uninitialized_page(start)) {
}
bool
-pmm_allocator_trymark_unhold(struct pmem_pool* pool, struct ppage* start, struct ppage* end)
+pmm_allocator_trymark_unhold(struct pmem_pool* pool,
+ struct ppage* start, struct ppage* end)
{
while (start <= end) {
if (!__uninitialized_page(start) && reserved_page(start)) {
*/
pte_t* ptep_ssm = mkl0tep_va(VMS_SELF, dest_mnt);
pte_t* ptep_sms = mkl1tep_va(VMS_SELF, dest_mnt) + VMS_SELF_L0TI;
- pte_t pte_sms = mkpte_prot(KERNEL_DATA);
+ pte_t pte_sms = mkpte_prot(KERNEL_PGTAB);
pte_sms = alloc_kpage_at(ptep_ssm, pte_sms, 0);
set_pte(ptep_sms, pte_sms);
while (i++ < MAX_PTEN) {
pte_t pte = *ptep;
- if (l0tep_impile_vmnts(ptep)) {
+ if (l0tep_implie_vmnts(ptep)) {
goto _cont;
}
void
procvm_mount(struct proc_mm* mm)
{
+ // if current mm is already active
+ if (active_vms(mm->vm_mnt)) {
+ return;
+ }
+
+    // otherwise, double mounting is a bug
assert(!mm->vm_mnt);
assert(mm->vmroot);
void
procvm_unmount(struct proc_mm* mm)
{
+ if (active_vms(mm->vm_mnt)) {
+ return;
+ }
+
assert(mm->vm_mnt);
-
vms_unmount(VMS_MOUNT_1);
+
struct proc_mm* mm_current = vmspace(__current);
if (mm_current) {
mm_current->guest_mm = NULL;
procvm_mount_self(struct proc_mm* mm)
{
assert(!mm->vm_mnt);
- assert(!mm->guest_mm);
mm->vm_mnt = VMS_SELF;
}
pte_t* rptep = mkptep_va(vm_mnt, remote_base);
pte_t* lptep = mkptep_va(VMS_SELF, rvmctx->local_mnt);
- unsigned int pattr = region_pteprot(region);
+
+ pte_t pte, rpte = null_pte;
+ rpte = region_tweakpte(region, rpte);
for (size_t i = 0; i < size_pn; i++)
{
- pte_t pte = vmm_tryptep(rptep, PAGE_SIZE);
+ pte = vmm_tryptep(rptep, PAGE_SIZE);
if (pte_isloaded(pte)) {
set_pte(lptep, pte);
continue;
ptr_t pa = ppage_addr(pmm_alloc_normal(0));
set_pte(lptep, mkpte(pa, KERNEL_DATA));
- set_pte(rptep, mkpte(pa, pattr));
+ set_pte(rptep, pte_setpaddr(rpte, pa));
}
return vm_mnt;
assert(vms_root);
pte_t* ptep = mkl0tep_va(VMS_SELF, mnt);
- set_pte(ptep, mkpte(vms_root, KERNEL_DATA));
+ set_pte(ptep, mkpte(vms_root, KERNEL_PGTAB));
tlb_flush_kernel(mnt);
return mnt;
}
clean-up on these thread, in the preemptible kernel thread.
*/
-void _preemptible
-cleanup_detached_threads() {
- ensure_preempt_caller();
-
+void
+cleanup_detached_threads()
+{
// XXX may be a lock on sched_context will ben the most appropriate?
cpu_disable_interrupt();
pid_t
destroy_process(pid_t pid)
-{
+{
int index = pid;
if (index <= 0 || index > sched_ctx.ptable_len) {
syscall_result(EINVAL);
static void
terminate_proc_only(struct proc_info* proc, int exit_code) {
+ assert(proc->pid != 0);
+
proc->state = PS_TERMNAT;
proc->exit_code = exit_code;
terminate_proc_only(proc, exit_code);
struct thread *pos, *n;
- llist_for_each(pos, n, &__current->threads, proc_sibs) {
+ llist_for_each(pos, n, &proc->threads, proc_sibs) {
pos->state = PS_TERMNAT;
}
}
terminate_current(caused_by | PEXITSIG);
}
+static inline void
+signal_terminate_proc(struct proc_info* pcb, int caused_by)
+{
+ terminate_proccess(pcb, caused_by | PEXITSIG);
+}
+
// Referenced in kernel/asm/x86/interrupt.S
void
signal_dispatch(struct signpost_result* result)
switch (signum)
{
case SIGKILL:
- signal_terminate(signum);
+ signal_terminate_proc(proc, signum);
break;
case SIGCONT:
case SIGSTOP:
__set_signal(proc->th_active, signum);
}
+static inline void
+__broadcast_group(struct proc_info* proc, signum_t signum)
+{
+ if (proc_terminated(proc)) {
+ return;
+ }
+
+ struct proc_info *pos, *n;
+ llist_for_each(pos, n, &proc->grp_member, grp_member)
+ {
+ proc_setsignal(pos, signum);
+ }
+}
+
int
signal_send(pid_t pid, signum_t signum)
{
if (pid > 0) {
proc = get_process(pid);
- goto send_single;
} else if (!pid) {
proc = __current;
- goto send_grp;
} else if (pid < 0) {
proc = get_process(-pid);
- goto send_grp;
+ __broadcast_group(proc, signum);
} else {
// TODO: send to all process.
// But I don't want to support it yet.
return EINVAL;
}
-send_grp: ;
- struct proc_info *pos, *n;
- llist_for_each(pos, n, &proc->grp_member, grp_member)
- {
- proc_setsignal(pos, signum);
- }
-
-send_single:
if (proc_terminated(proc)) {
return EINVAL;
}
-.text BLOCK(PAGE_GRAN) : AT ( ADDR(.text) - KEXEC_BASE )
+.text BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.text) - KEXEC_BASE )
{
*(.text)
-}
+} : text
-.kf.preempt BLOCK(PAGE_GRAN) : AT ( ADDR(.kf.preempt) - KEXEC_BASE )
-{
- PROVIDE(__kf_preempt_start = .);
-
- KEEP(*(.kf.preempt));
-
- PROVIDE(__kf_preempt_end = .);
-}
-
-PROVIDE(__kexec_text_end = .);
-
-.data BLOCK(PAGE_GRAN) : AT ( ADDR(.data) - KEXEC_BASE )
+.data BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.data) - KEXEC_BASE )
{
*(.data)
-}
+} : data
-.rodata BLOCK(PAGE_GRAN) : AT ( ADDR(.rodata) - KEXEC_BASE )
+.rodata BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.rodata) - KEXEC_BASE )
{
*(.rodata)
*(.rodata.*)
-}
-
-.kpg BLOCK(PAGE_GRAN) : AT ( ADDR(.kpg) - KEXEC_BASE )
-{
- *(.kpg)
-}
\ No newline at end of file
+} : rodata
\ No newline at end of file
#include "base.ldx"
-.lga BLOCK(PAGE_GRAN) : AT ( ADDR(.lga) - KEXEC_BASE )
+.lga BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.lga) - KEXEC_BASE )
{
PROVIDE(__lga_twiplugin_inits_start = .);
KEEP(*(.lga.lunainit.c_postboot));
PROVIDE(__lga_lunainit_on_postboot_end = .);
-}
\ No newline at end of file
+} : rodata
\ No newline at end of file
ENTRY(ENTRY_POINT)
+PHDRS
+{
+ boot_text PT_LOAD;
+ boot_data PT_LOAD;
+
+ text PT_LOAD;
+ data PT_LOAD;
+ rodata PT_LOAD FLAGS(4);
+
+ var PT_LOAD;
+}
+
SECTIONS {
. = LOAD_OFF;
+ __kload_start = ALIGN(PAGE_GRAN);
+
+
+ /* ---- boot start ---- */
+ __kboot_start = .;
+
#include <linking/boot_secs.ldx>
+ __kboot_end = ALIGN(PAGE_GRAN);
+
+
/* ---- kernel start ---- */
. += KEXEC_BASE;
- PROVIDE(__kexec_text_start = ALIGN(PAGE_GRAN));
+ PROVIDE(__kexec_text_start = ALIGN(PAGE_GRAN));
__kexec_start = ALIGN(PAGE_GRAN);
-
-
+
/* kernel executable sections */
#include "kernel.ldx"
-
/* link-time allocated array */
#include "lga.ldx"
+    /*
+        All auto-generated stuff and uninitialized data
+        must be members of the `var` segment
+    */
+
+ .autogen BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.autogen) - KEXEC_BASE )
+ {
+ KEEP(*(.autogen.*))
+ } : var
+
+    /*
+        End of loadable regions.
+        This fake section is used to correct the address
+        calculation.
+    */
+
+ .__load_end : {
+ __kload_end = ALIGN(PAGE_GRAN) - KEXEC_BASE;
+ } : var
- /* All other stuff */
- .ksymtable BLOCK(PAGE_GRAN) : AT ( ADDR(.ksymtable) - KEXEC_BASE )
+ .kpg BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.kpg) - KEXEC_BASE )
{
- *(.ksymtable)
- }
+ KEEP(*(.kpg))
+ } : var
- .bss BLOCK(PAGE_GRAN) : AT ( ADDR(.bss) - KEXEC_BASE )
+ .bss BLOCK(PAGE_GRAN)
+ : AT ( ADDR(.bss) - KEXEC_BASE )
{
*(.bss)
- }
- .bss.kstack BLOCK(PAGE_GRAN) : AT ( ADDR(.bss.kstack) - KEXEC_BASE )
- {
+ /* static kernel stack */
+ . = ALIGN(PAGE_GRAN);
PROVIDE(__bsskstack_start = .);
-
*(.bss.kstack)
-
PROVIDE(__bsskstack_end = .);
- }
- __kexec_end = ALIGN(PAGE_GRAN);
+ /* reclaimable data */
+ . = ALIGN(PAGE_GRAN);
+ PROVIDE(__bssreclaim_start = .);
+ *(.bss.reclaim)
+ PROVIDE(__bssreclaim_end = .);
+ } : var
+
+ .__end_of_lunaix :
+ {
+ __kbss_end = ALIGN(PAGE_GRAN) - KEXEC_BASE;
+ __kexec_end = ALIGN(PAGE_GRAN);
+
+#ifdef CONFIG_USE_DEVICETREE
+ __dtb_start = ALIGN(PAGE_GRAN);
+ . = __dtb_start + CONFIG_DTB_MAXSIZE;
+#endif
+
+ __kimg_end = ALIGN(PAGE_GRAN);
+ } : var
}
\ No newline at end of file
gdb_port=1234
default_cmd="console=/dev/ttyS0"
-make CMDLINE=${default_cmd} ARCH=${ARCH} MODE=${MODE:-debug} image -j5 || exit -1
+make CMDLINE=${default_cmd} ARCH=${ARCH} MODE=${MODE:-debug} all -j5 || exit -1
./scripts/qemu.py \
scripts/qemus/qemu_x86_dev.json \
--qemu-dir "${QEMU_DIR}" \
- -v KIMG=build/lunaix.iso \
-v QMPORT=${hmp_port} \
-v GDB_PORT=${gdb_port} \
- -v EXT2_TEST_DISC=machine/test_part.ext2 \
- -v ARCH=${ARCH} &
+ -v ROOTFS=lunaix_rootfs.ext2 \
+ -v ARCH=${ARCH} \
+ -v KBIN=build/bin/kernel.bin \
+ -v "KCMD=${default_cmd} rootfs=/dev/block/sda init=/bin/init" \
+ -- \
+ -nographic &
QMPORT=${hmp_port} gdb build/bin/kernel.bin -ex "target remote localhost:${gdb_port}"
\ No newline at end of file
MODE ?= debug
export ARCH
-DEPS := $(CC) $(LD) $(AR) xorriso grub-mkrescue
+DEPS := $(CC) $(LD) $(AR)
CMDLINE ?= console=/dev/ttyFB0
$(kbuild_dir):
@mkdir -p $(kbin_dir)
@mkdir -p $(os_img_dir)
- @mkdir -p $(os_img_dir)/boot
- @mkdir -p $(os_img_dir)/boot/grub
@mkdir -p $(os_img_dir)/usr
.PHONY: kernel
@$(MAKE) $(MKFLAGS) -I $(mkinc_dir) -f kernel.mk all
+.PHONY: rootfs all clean-user clean tool
+
+tool:
+ $(call status,TASK,$@)
+ @$(MAKE) $(MKFLAGS) -C scripts all -I $(mkinc_dir)
+
.NOTPARALLEL:
-.PHONY: image
export KCMD=$(CMDLINE)
export LBUILD ARCH MODE
-image: $(kbuild_dir) kernel usr/build
- $(call status,TASK,$(notdir $@))
- $(call status,PACK,$(kimg))
+all: $(kbuild_dir) tool kernel usr/build
- @./scripts/grub/config-grub.sh $(os_img_dir)/boot/grub/grub.cfg
- @cp -r usr/build/* $(os_img_dir)/usr
- @cp -r $(kbin_dir)/* $(os_img_dir)/boot
- @grub-mkrescue -o $(kimg) $(os_img_dir)\
- -- -volid "LUNA" -system_id "Lunaix" \
- -report_about FAILURE -abort_on FAILURE
+rootfs: usr/build
+ $(call status,TASK,$(notdir $@))
+ @./scripts/mkrootfs
usr/build: user
$(call status,TASK,$@)
@$(MAKE) $(MKFLAGS) -C usr all -I $(mkinc_dir)
-clean:
+clean-user:
@$(MAKE) -C usr clean -I $(mkinc_dir)
+
+clean:
+ @$(MAKE) $(MKFLAGS) -C usr clean -I $(mkinc_dir)
+ @$(MAKE) $(MKFLAGS) -C scripts clean -I $(mkinc_dir)
@$(MAKE) -f kernel.mk clean -I $(mkinc_dir)
+
@rm -rf $(kbuild_dir) || exit 1
@rm -rf .builder || exit 1
--- /dev/null
+#include <sys/mman.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/fcntl.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+
+typedef unsigned long elf64_ptr_t;
+typedef unsigned short elf64_hlf_t;
+typedef unsigned long elf64_off_t;
+typedef int elf64_swd_t;
+typedef unsigned int elf64_wrd_t;
+typedef unsigned long elf64_xwrd_t;
+typedef long elf64_sxwrd_t;
+
+typedef unsigned int elf32_ptr_t;
+typedef unsigned short elf32_hlf_t;
+typedef unsigned int elf32_off_t;
+typedef int elf32_swd_t;
+typedef unsigned int elf32_wrd_t;
+
+#define ELFCLASS32 1
+#define ELFCLASS64 2
+
+#define PT_LOAD 1
+
+typedef unsigned long ptr_t;
+
+struct elf_generic_ehdr
+{
+ union {
+ struct {
+ unsigned int signature;
+ unsigned char class;
+ } __attribute__((packed));
+ unsigned char e_ident[16];
+ };
+ unsigned short e_type;
+ unsigned short e_machine;
+ unsigned int e_version;
+};
+
+struct elf32_ehdr
+{
+ struct elf_generic_ehdr head;
+ elf32_ptr_t e_entry;
+ elf32_off_t e_phoff;
+ elf32_off_t e_shoff;
+ elf32_wrd_t e_flags;
+ elf32_hlf_t e_ehsize;
+ elf32_hlf_t e_phentsize;
+ elf32_hlf_t e_phnum;
+ elf32_hlf_t e_shentsize;
+ elf32_hlf_t e_shnum;
+ elf32_hlf_t e_shstrndx;
+};
+
+struct elf64_ehdr
+{
+ struct elf_generic_ehdr head;
+ elf64_ptr_t e_entry;
+ elf64_off_t e_phoff;
+ elf64_off_t e_shoff;
+ elf64_wrd_t e_flags;
+ elf64_hlf_t e_ehsize;
+ elf64_hlf_t e_phentsize;
+ elf64_hlf_t e_phnum;
+ elf64_hlf_t e_shentsize;
+ elf64_hlf_t e_shnum;
+ elf64_hlf_t e_shstrndx;
+};
+
+struct elf64_phdr
+{
+ elf64_wrd_t p_type;
+ elf64_wrd_t p_flags;
+ elf64_off_t p_offset;
+ elf64_ptr_t p_va;
+ elf64_ptr_t p_pa;
+ elf64_xwrd_t p_filesz;
+ elf64_xwrd_t p_memsz;
+ elf64_xwrd_t p_align;
+};
+
+struct elf32_phdr
+{
+ elf32_wrd_t p_type;
+ elf32_off_t p_offset;
+ elf32_ptr_t p_va;
+ elf32_ptr_t p_pa;
+ elf32_wrd_t p_filesz;
+ elf32_wrd_t p_memsz;
+ elf32_wrd_t p_flags;
+ elf32_wrd_t p_align;
+};
+
+struct elf_section
+{
+ ptr_t va;
+ ptr_t pa;
+ unsigned int flags;
+ unsigned int memsz;
+};
+
+struct ksec_genctx
+{
+ struct elf_section* secs;
+ int size;
+ const char* prefix;
+};
+
+#define MAPPED_SIZE (256 << 10)
+
+static struct elf_generic_ehdr*
+__load_elf(const char* path)
+{
+ int fd;
+ struct elf_generic_ehdr* ehdr;
+
+ fd = open(path, O_RDONLY);
+ if (fd == -1) {
+ printf("fail to open elf: %s\n", strerror(errno));
+ return NULL;
+ }
+
+ ehdr = mmap(NULL, MAPPED_SIZE, PROT_READ, MAP_SHARED, fd, 0);
+ if ((void*)ehdr == (void*)-1) {
+ printf("fail to mmap elf (%d): %s\n", errno, strerror(errno));
+ return NULL;
+ }
+
+ return ehdr;
+}
+
+static void
+__wr_mapentry(struct ksec_genctx* ctx, struct elf_section* sec)
+{
+ printf("/* --- entry --- */\n");
+ printf("%s 0x%lx\n", ctx->prefix, sec->va);
+ printf("%s 0x%lx\n", ctx->prefix, sec->pa);
+ printf(".4byte 0x%x\n", sec->memsz);
+ printf(".4byte 0x%x\n", sec->flags);
+}
+
+static void
+__wr_maplast(struct ksec_genctx* ctx, struct elf_section* sec)
+{
+ printf("/* --- entry --- */\n");
+ printf("%s 0x%lx\n", ctx->prefix, sec->va);
+ printf("%s 0x%lx\n", ctx->prefix, sec->pa);
+ printf(".4byte (__kexec_end - 0x%lx)\n", sec->va);
+ printf(".4byte 0x%x\n", sec->flags);
+}
+
+#define SIZEPF32 ".4byte"
+#define SIZEPF64 ".8byte"
+#define gen_ksec_map(bits, ctx, ehdr) \
+ ({ \
+ struct elf##bits##_ehdr *_e; \
+ struct elf##bits##_phdr *phdr, *phent; \
+ _e = (struct elf##bits##_ehdr*)(ehdr); \
+ phdr = (struct elf##bits##_phdr*)((ptr_t)_e + _e->e_phoff); \
+ for (int i = 0, j = 0; i < _e->e_phnum; i++) { \
+ phent = &phdr[i]; \
+ if (phent->p_type != PT_LOAD) { \
+ continue; \
+ } \
+ ctx.secs[j++] = (struct elf_section) { \
+ .va = phent->p_va, \
+ .pa = phent->p_pa, \
+ .memsz = phent->p_memsz, \
+ .flags = phent->p_flags, \
+ }; \
+ } \
+ })
+
+#define count_loadable(bits, ehdr) \
+ ({ \
+ struct elf##bits##_ehdr *_e; \
+ struct elf##bits##_phdr *phdr, *phent; \
+ int all_loadable = 0; \
+ _e = (struct elf##bits##_ehdr*)(ehdr); \
+ phdr = (struct elf##bits##_phdr*)((ptr_t)_e + _e->e_phoff); \
+ for (int i = 0; i < _e->e_phnum; i++) { \
+ phent = &phdr[i]; \
+ if (phent->p_type == PT_LOAD) { \
+ all_loadable++; \
+ } \
+ } \
+ all_loadable; \
+ })
+
+static void
+__emit_size(struct ksec_genctx* genctx)
+{
+ ptr_t va;
+ unsigned int size = 0;
+ int n = genctx->size - 1;
+
+    /*
+       the first two LOAD segments are boot text and data.
+       we are calculating the kernel size, so
+       ignore them.
+    */
+ for (int i = 2; i < n; i++)
+ {
+ size += genctx->secs[i].memsz;
+ }
+
+ va = genctx->secs[n].va;
+ printf(".4byte 0x%x + (__kexec_end - 0x%lx)\n", size, va);
+}
+
+static void
+__generate_kernelmap(struct elf_generic_ehdr* ehdr)
+{
+
+ printf(".section .autogen.ksecmap, \"a\", @progbits\n"
+ ".global __autogen_ksecmap\n"
+ "__autogen_ksecmap:\n");
+
+ struct ksec_genctx genctx;
+
+ if (ehdr->class == ELFCLASS32) {
+ genctx.size = count_loadable(32, ehdr);
+ genctx.prefix = SIZEPF32;
+ } else {
+ genctx.size = count_loadable(64, ehdr);
+ genctx.prefix = SIZEPF64;
+ }
+
+ genctx.secs = calloc(genctx.size, sizeof(struct elf_section));
+
+ if (ehdr->class == ELFCLASS32) {
+ gen_ksec_map(32, genctx, ehdr);
+ }
+ else {
+ gen_ksec_map(64, genctx, ehdr);
+ }
+
+ int i = 0;
+ struct elf_section* sec_ent;
+
+ printf(".4byte 0x%x\n", genctx.size);
+ __emit_size(&genctx);
+
+    /*
+        Lunaix defines the last LOAD phdr as variable
+        sized, i.e. the actual size will not be known
+        until after relinking, so we need to emit a special
+        entry and let the linker determine the size.
+        (see __wr_maplast)
+    */
+
+ for (; i < genctx.size - 1; i++)
+ {
+ sec_ent = &genctx.secs[i];
+ __wr_mapentry(&genctx, sec_ent);
+ }
+
+ __wr_maplast(&genctx, &genctx.secs[i]);
+}
+
+#define MODE_GETARCH 1
+#define MODE_GENLOAD 2
+#define MODE_ERROR 3
+
+int
+main(int argc, char* const* argv)
+{
+ int c, mode;
+    char *path;
+
+ path = NULL;
+ mode = MODE_GETARCH;
+
+ while ((c = getopt(argc, argv, "i:tph")) != -1)
+ {
+ switch (c)
+ {
+ case 'i':
+ path = optarg;
+ break;
+
+ case 't':
+ mode = MODE_GETARCH;
+ break;
+
+ case 'p':
+ mode = MODE_GENLOAD;
+ break;
+
+ case 'h':
+            printf("usage: elftool -i elf_file [-t|-p]\n");
+ printf(" -t: get elf type.\n");
+ printf(" -p: generate load sections.\n");
+ exit(1);
+ break;
+
+ default:
+            printf("unknown option: '%c'\n", optopt);
+ exit(1);
+ break;
+ }
+ }
+
+ if (!path) {
+ printf("must specify an elf.\n");
+ exit(1);
+ }
+
+ struct elf_generic_ehdr* ehdr;
+ ehdr = __load_elf(path);
+
+ if (!ehdr) {
+ return 1;
+ }
+
+ if (ehdr->signature != 0x464c457fU) {
+ printf("not an elf file\n");
+ return 1;
+ }
+
+ if (mode == MODE_GETARCH) {
+ if (ehdr->class == ELFCLASS32) {
+ printf("ELF32\n");
+ }
+ else {
+ printf("ELF64\n");
+ }
+ return 0;
+ }
+
+ if (mode == MODE_GENLOAD) {
+ __generate_kernelmap(ehdr);
+ return 0;
+ }
+
+ return 0;
+}
\ No newline at end of file
if (i + 1) % optn.cols == 0:
pp2.print(''.join(row))
row.clear()
- if (i + 1) % optn.cols != 0:
+ if len(row) > 0:
pp2.print(''.join(row))
pp.printf("(granule: %d, density: %d@4K)", optn.granule, pmem.page_per_granule)
self.mem_distr.clear()
pplist = self._pmem.pplist()
- page_per_granule = self.max_mem_pg / self.__mem_distr_granule
- page_per_granule = math.ceil(page_per_granule)
+ page_per_granule = int(self.max_mem_pg) // self.__mem_distr_granule
remainder = self.max_mem_pg % self.__mem_distr_granule
bucket = 0
non_contig = 0
#!/usr/bin/env bash
+SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
sym_types=$1
bin=$2
nm_out=$(nm -nfbsd "$bin")
-# class_info=$(readelf -h "$bin" | grep 'Class:' | awk '{print $2}')
+class_info=$("$SCRIPT_DIR/elftool.tool" -t -i "$bin")
allsyms=($nm_out)
allsyms_len=${#allsyms[@]}
dtype="4byte"
-if [ "$ARCH" == 'x86_64' ]; then
+if [ "$class_info" == 'ELF64' ]; then
dtype="8byte"
fi
syms_len=${#syms_idx[@]}
declare -A assoc_array
-echo '.section .ksymtable, "a", @progbits'
-echo " .global __lunaix_ksymtable"
-echo " __lunaix_ksymtable:"
+echo '.section .autogen.ksymtable, "a", @progbits'
+echo " .global __autogen_ksymtable"
+echo " __autogen_ksymtable:"
echo " .$dtype $syms_len"
echo " .align 8"
+++ /dev/null
-default=0
-timeout=0
-
-menuentry "lunaix" {
- multiboot /boot/kernel.bin $KCMD
-}
\ No newline at end of file
+++ /dev/null
-#!/usr/bin/env bash
-
-SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
-
-cat "${SCRIPT_DIR}/GRUB_TEMPLATE" | envsubst > "$1"
\ No newline at end of file
--- /dev/null
+include lunabuild.mkinc
+include utils.mkinc
+
+CFLAGS := -I$(lbuild_config_h)
+
+SRC := elftool
+OUT := $(addsuffix .tool,$(SRC))
+
+%.tool : %.c
+ $(call status,CC,$<)
+ @$(CC) $< -o $@
+
+.PHONY: all clean
+
+all: $(OUT)
+
+clean:
+ rm -f $(OUT)
\ No newline at end of file
--- /dev/null
+#!/usr/bin/env bash
+
+SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+WS=$(realpath $SCRIPT_DIR/..)
+USR="${WS}/usr/build"
+
+fs="ext2"
+rootfs="${WS}/lunaix_rootfs.${fs}"
+size_mb=16
+
+if [ ! -d "${USR}" ]; then
+ echo "build the user target first!"
+ exit 1
+fi
+
+prefix=""
+if [ ! "$EUID" -eq 0 ]; then
+ echo "==================="
+    echo " mkrootfs requires root privilege to manipulate the disk image"
+    echo " you may be prompted for a password"
+ echo "==================="
+ echo
+ prefix="sudo"
+fi
+
+tmp_mnt="$(mktemp -d)"
+
+function cleanup() {
+    echo "an error occurred, reverting..."
+
+ for arg in "$@"
+ do
+ case "$arg" in
+ "tmpmnt")
+ echo "revert: ${tmp_mnt}"
+ ${prefix} rm -rf "${tmp_mnt}"
+ ;;
+ "img")
+ echo "revert: ${rootfs}"
+ rm -f "${rootfs}"
+ ;;
+ "mnt")
+                echo "revert: umount ${tmp_mnt}"
+                ${prefix} umount "${tmp_mnt}"
+ ;;
+ esac
+ done
+
+ exit 1
+}
+
+dd if=/dev/zero of="${rootfs}" count=${size_mb} bs=1M \
+ || cleanup tmpmnt
+
+mkfs.${fs} -L lunaix-rootfs -r 0 "${rootfs}" \
+ || cleanup tmpmnt img
+
+${prefix} mount -o loop "${rootfs}" "${tmp_mnt}" \
+ || cleanup tmpmnt img
+
+${prefix} chmod -R o+rwx ${tmp_mnt} \
+ || cleanup tmpmnt img
+
+
+cd "${tmp_mnt}" || cleanup tmpmnt img
+
+${prefix} ${SCRIPT_DIR}/mkrootfs-layout ${tmp_mnt} ${USR}
+
+has_err=$?
+if [ "$has_err" -eq 2 ]; then
+ cleanup mnt tmpmnt img
+fi
+
+sync -f .
+
+cd "${WS}" || cleanup
+
+${prefix} umount "${tmp_mnt}" || cleanup
+
+${prefix} rm -d "${tmp_mnt}" || cleanup
+
+if [ ! "${has_err:-0}" -eq 0 ]; then
+ echo "done, but with error."
+else
+ echo "done"
+fi
+
+exit 0
\ No newline at end of file
--- /dev/null
+#!/usr/bin/env bash
+
+base="$1"
+content="$2"
+
+if [ -z "$base" ]; then
+ echo "please specify the working directory"
+ exit 2
+fi
+
+cd "$base" || exit 2
+
+echo "creating basic layout."
+
+mkdir -p bin dev sys task mnt lib usr \
+ || has_err=1
+
+if [ -n "${content}" ]; then
+ echo "copying contents"
+
+ cp -R "${content}"/* .
+else
+    echo "Note: no content specified; only the basic layout is created."\
+         "You may need to add the contents later"
+ has_err=1
+fi
+
+echo "ownership set to root:root"
+
+chown -R root:root . \
+ || has_err=1
+
+exit "${has_err:-0}"
\ No newline at end of file
return cmds
def get_qemu_general_opts(self):
- return [
+ opts = [
"-m", get_config(self._opt, "memory", required=True),
"-smp", str(get_config(self._opt, "ncpu", default=1))
]
+ kopts = get_config(self._opt, "kernel")
+ if kopts:
+ opts += [
+ "-kernel", get_config(kopts, "bin", required=True),
+ "-append", get_config(kopts, "cmd", required=True)
+ ]
+
+ dtb = get_config(kopts, "dtb")
+ if dtb:
+ opts += [ "-dtb", dtb ]
+
+ return opts
+
def add_peripheral(self, peripheral):
self._devices.append(peripheral)
opts.update(json.loads(f.read()))
for kv in arg_opt.values:
- [k, v] = kv.split('=')
+ splits = kv.split('=')
+ k, v = splits[0], "=".join(splits[1:])
g_lookup[k] = v
arch = get_config(opts, "arch")
"apic"
]
},
+ "kernel": {
+ "bin": "$KBIN",
+ "cmd": "$KCMD"
+ },
"debug": {
"gdb_port": "$GDB_PORT",
"traced": [
"class": "ahci",
"name": "ahci_0",
"disks": [
- {
- "type": "ide-cd",
- "img": "$KIMG",
- "ro": true,
- "format": "raw"
- },
{
"type": "ide-hd",
- "img": "$EXT2_TEST_DISC",
+ "img": "$ROOTFS",
"format": "raw"
}
]
return 0;
}
-const char* sh_argv[] = { "/usr/bin/sh", 0 };
+const char* sh_argv[] = { "/bin/sh", 0 };
const char* sh_envp[] = { 0 };
int
main(int argc, const char** argv)
{
- mkdir("/dev");
- mkdir("/sys");
- mkdir("/task");
- mkdir("/mnt/disk");
-
must_mount(NULL, "/dev", "devfs", 0);
must_mount(NULL, "/sys", "twifs", MNT_RO);
must_mount(NULL, "/task", "taskfs", MNT_RO);
- maybe_mount("/dev/block/sdb", "/mnt/disk", "ext2", 0);
int fd = check(open("/dev/tty", 0));
check(dup(fd));
- check(symlink("/usr", "/mnt/lunaix-os/usr"));
-
pid_t pid;
int err = 0;
if (!(pid = fork())) {
movl 4(%esp), %ebx
pushl %ebx
- call *(%eax)
+ calll *%eax
movl %eax, %ebx
movl $__SYSCALL_th_exit, %eax
movq (%rsp), %rax
movq 8(%rsp), %rdi
- callq %rax
+ callq *%rax
movq %rax, %rbx
movq $__SYSCALL_th_exit, %rax
obj_files := $(addsuffix .o, $(_LBUILD_SRCS))
build_lib := $(BUILD_DIR)/lib
-build_include := $(BUILD_DIR)/includes
libc_include_opt = $(addprefix -I, $(libc_include))
global_include_opt = $(addprefix -I, $(INCLUDES) $(_LBUILD_INCS))
check_folders := $(src_dirs)
-check_folders += $(build_lib) $(build_include)
+check_folders += $(build_lib) $(LIBC_INCLUDE)
$(BUILD_DIR):
@mkdir -p bin
headers: $(libc_include)
@$(call status_,INSTALL,$(<F))
- @cp -r $(libc_include)/* $(build_include)/
+ @cp -r $(libc_include)/* $(LIBC_INCLUDE)/
all: $(addsuffix .check, $(check_folders)) $(build_lib)/$(BUILD_NAME) headers
@cp arch/$(ARCH)/crt0.S.o $(build_lib)/crt0.o
\ No newline at end of file
libc := $(addprefix $(build_dir)/lib/,$(libc_files))
common_param := CC AR INCLUDES BUILD_DIR BUILD_NAME\
- CFLAGS LDFLAGS ARCH LBUILD
+ CFLAGS LDFLAGS ARCH LBUILD LIBC_INCLUDE
INCLUDES := $(sys_include)
BUILD_DIR := $(build_dir)
BUILD_NAME := $(libc_name).a
+LIBC_INCLUDE := $(build_dir)/usr/includes
+
mkapp-list := $(addprefix app-, $(shell cat apps.list))
mkexec-list := $(addprefix $(build_dir)/bin/, $(_LBUILD_SRCS))
$(build_dir)/lib:
@mkdir -p $(build_dir)/lib
-$(build_dir)/includes:
- @mkdir -p $(build_dir)/includes
+$(LIBC_INCLUDE):
+ @mkdir -p $(LIBC_INCLUDE)
# LibC
export $(common_param)
-$(build_dir)/$(libc_name).a: $(build_dir)/bin $(build_dir)/lib $(build_dir)/includes
+$(build_dir)/$(libc_name).a: $(build_dir)/bin \
+ $(build_dir)/lib $(LIBC_INCLUDE)
$(call status,TASK,$(BUILD_NAME))
@$(MAKE) $(MKFLAGS) -C libc/ $(task)
@$(MAKE) $(MKFLAGS) -C $* $(task) BUILD_NAME="$*"
app: task := all
-app: INCLUDES += $(build_dir)/includes
+app: INCLUDES += $(LIBC_INCLUDE)
app: $(mkapp-list)
@$(CC) -T $(uexec_ld) -o $@ $< $(libc) $(LDFLAGS)
exec: task := all
-exec: INCLUDES += $(build_dir)/includes
+exec: INCLUDES += $(LIBC_INCLUDE)
exec: $(mkexec-list)
}
char buffer[1024];
- strcpy(buffer, "/usr/bin/");
- strcpy(&buffer[9], name);
+ strcpy(buffer, "/bin/");
+ strcpy(&buffer[5], name);
pid_t p;
int res;