
Getting Started

"casvp" is the abbreviation for "CASLab Virtual Platform", but I believe it deserves a better name.

casvp provides the capability of constructing an electronic system level (ESL) platform with a Lua-based scripting interface.

Basic concepts

Directory structure

  • ext: External sources (git submodules or other library codes)
  • lua: All Lua sources to be run by casvp
    • library: All Lua API definitions provided by casvp (Used by LuaLS)
  • src: SystemC/C++ sources
    • bindings: Sources that expose SystemC/C++ APIs to Lua
    • utils: Utilities to be used internally in SystemC/C++
  • tests: Test sources
    • integration: Integration tests
      • opencl: OpenCL integration tests
    • unit: Unit tests

Namespaces

In casvp, exposed SystemC/C++ APIs are grouped by "namespaces". For example, simtix-related modules/APIs are placed under the simtix namespace. Files related to the simtix namespace are placed here:

  • lua/simtix/*: simtix related Lua scripts
  • lua/library/simtix/*: simtix related Lua API definitions
  • src/bindings/simtix/*.cc: C++ sources for simtix related bindings
  • tests/integration/simtix.lua: Integration tests for simtix
  • tests/unit/simtix/*: Unit tests for simtix

The Lua APIs of the simtix namespace are grouped in the global simtix table:

local sm = simtix.PipelinedSM("sm0", 0) -- simtix's PipelinedSM model
local cache = simtix.Cache("cache") -- simtix's Cache model

See all defined namespaces in casvp.yaml.

casvp as Lua interpreter

You may think of casvp as a domain-specific Lua interpreter. casvp uses LuaJIT as its scripting middleware to provide blazing-fast execution. casvp can run an ordinary Lua program (fib.lua):

-- fib.lua
local function fib(n)
    if n < 2 then
        return n
    else
        return fib(n - 1) + fib(n - 2)
    end
end
print(fib(40))

Then execute fib.lua with casvp:

./build/bin/casvp fib.lua

And you will get 102334155. (You can compare the speed with the ordinary Lua 5.1 interpreter.)

To construct an ESL system simulation using casvp, you need to write "bindings" (src/bindings) so that SystemC modules are exposed to the scripting environment. casvp can then utilize the Lua bindings to call SystemC/C++ APIs.

For instance, the following code utilizes the sc and simple bindings to set up a very basic ESL simulation. The script can only be executed by casvp, not by an ordinary Lua interpreter, which knows nothing about the sc and simple bindings.

local period = sc.time(10, sc.time_unit.NS) -- binding of `sc_time`
local clk = sc.clock("clock", period) -- binding of `sc_clock`
local initiator = simple.Initiator("i") -- binding of custom module `Initiator`
local memory = simple.Memory("m", { -- binding of custom module `Memory`
    size = 1024
})
-- Connecting all components to build the system
initiator.clock = clk
initiator.target = memory.port -- bind initiator's socket to memory's
memory.clock = clk
sc.start(10 * period) -- run 10 cycles
print(sc.time_stamp()) -- show time stamp after running for 10 cycles

Describing your own system with casvp

Currently, we don't have detailed documentation about available bindings. However, you can utilize LuaLS to help find all available APIs.

For VSCode users, you can install the Lua language support directly from the marketplace. With the language server, tab auto-completion is enabled so that you can discover all available APIs.

For Neovim users, you know how to do it by yourself.

To see examples of Lua scripts that construct ESL systems, see *.lua under lua/ or tests/.

Development with casvp

Writing bindings

Lua bindings are built with the excellent sol2 library. Please refer to its docs for detailed API usage. In casvp, we simplify the process by declaring a LUA_CTOR for your bindings. For example, the simple.Memory bindings are declared here:

LUA_CTOR(simple, Memory) {
  auto memory_type = simple.new_usertype<Memory>(
      "Memory", sol::call_constructor,
      sol::factories([](const char *name, const sol::table &param) {
        int size = param.get_or("size", 1024);
        unsigned latency = param.get_or("latency", 10);
        unsigned fifo_size = param.get_or("fifo_size", 1);
        return std::make_shared<Memory>(name, size, latency, fifo_size);
      }));
  memory_type["port"] = &Memory::port_;
  memory_type["axi_port"] = sol::property(&Memory::axi_port);
  memory_type["size"] = sol::property(&Memory::size);
  memory_type["clock"] = sol::property(&Memory::clock, &Memory::set_clock);
  memory_type["read_bytes"] = &Memory::read_bytes;
  memory_type["write_bytes"] = &Memory::write_bytes;
}

In the above code, first you will notice that the LUA_CTOR macro accepts 2 arguments: simple and Memory. The first argument is the namespace of your binding, while the second is an arbitrary name used only to distinguish LUA_CTORs when several exist in the source code.

In LUA_CTOR, you can then use the specified namespace to create a new usertype and specify all the properties of the type, such as port or clock in the example.

What the LUA_CTOR magic actually does is run the code you write in the braces before main. All LUA_CTOR bodies are executed before main, and we call this process "runtime initialization", meaning that we initialize the Lua runtime before actually executing the scripts.

Note:

In the new_usertype definition, we use sol::call_constructor so that calling Memory() returns the Memory module.

sol::factories accepts multiple lambdas that create the instance. The signature of each lambda is how you call the constructor in Lua, so you can overload the constructor by specifying multiple lambdas.

Note that returning a shared pointer (std::shared_ptr) is recommended so that we share ownership of the instance with sol2. This prevents the garbage collection mechanism in Lua from killing the object. (Pointers with zero references would be deallocated.)

Port bindings

In casvp, bound modules usually utilize the TLM 2.0 interface for interconnection. What you used to do in SystemC looks like this:

// C++
initiator.simple_initiator_socket.bind(memory.simple_target_socket);

And in casvp, we do:

-- Lua
initiator.target = memory.port

The magic here is that

  1. The memory exposes its simple_target_socket via the port property binding.
  2. The = in the script is actually calling the setter function of initiator's target property binding.

This way, we are actually running a small piece of C++ code in which the initiator and target sockets are bound together, while it looks like a simple assignment in the Lua script.

Note:

To account for the various types of target sockets (TLM sockets are templated types, so different modules' target sockets have different types), we use base_target_socket_type in the setter function:

using Target =
      tlm_utils::simple_initiator_socket<Initiator>::base_target_socket_type;

And sol2 handles the polymorphism for us, as all connectable socket types derive from base_target_socket_type.

Binding simtix modules

simtix utilizes the global tick entry function simtix::sim::Tick() to drive all components one cycle forward. Therefore, we manage the tick task in a singleton class, TickManager. All simtix bindings should attach their clock to the TickManager via AttachClock to make sure the clock signal triggers the invocation of simtix::sim::Tick. The only SC_METHOD in TickManager calls simtix::sim::Tick on every posedge event of the attached clock.

simtix utilizes MemoryInterface for connecting with memory modules. To adapt to the TLM interface, we have two adaptors: FromTlm and ToTlm. A native simtix module can send read/write requests to a ToTlm module, which then forwards the requests as TLM transactions. In the other direction, TLM transactions can be received by FromTlm and forwarded to a native simtix MemoryInterface module.

For example, the simtix Cache utilizes both the ToTlm and FromTlm adaptors (mem_side_ and core_side_ respectively).

ToTlm *mem_side() {
  if (!mem_side_) {
    mem_side_ = std::make_unique<ToTlm>("mem_side");
    if (clock_) {
      mem_side_->set_clock(clock_);
    }
    Cache::AttachNextLevel(mem_side_.get());
  }
  return mem_side_.get();
}

FromTlm *core_side() {
  if (!core_side_) {
    core_side_ = std::make_unique<FromTlm>("core_side_", this);
    if (clock_) {
      core_side_->set_clock(clock_);
    }
  }
  return core_side_.get();
}

Note that these two adaptors are "lazy-initialized": in some cases, the cache is not connected to the TLM interfaces directly but is instead acquired by a native simtix module, such as a PipelinedSM.

void set_icache(std::shared_ptr<mem::CacheInterface> icache) {
  icache_ = icache;
  PipelinedSM::AttachICache(icache_);
}

In this case, PipelinedSM::AttachICache is the native simtix method for connecting a cache to the SM. This maps to the Lua binding as follows:

local sm = simtix.PipelinedSM("sm0", 0)
sm.icache = simtix.Cache("icache0") -- This binds to `set_icache` method above

Inter-process communication (IPC)

casvp includes two modules dedicated to handling IPC: DbgAgent and TimingAgent. Both inherit from BaseAgent. The agents are modules that can be used in the casvp system. They handle IPC requests through the Server class, which utilizes a Unix domain socket to communicate. The Server has two queues:

  1. fw_queue: stores requests from the clients to the server
  2. bw_queue: stores responses from the server to the clients

Note:

  • Question: Why Agent and Server?
  • Ans: Server is like a web server that serves incoming requests sent via libcomm, handling all IPC-related tasks, while the Agent is a proxy that forwards the received packet to the ESL system using the TLM transport method.

The agents will:

  1. Check fw_queue
  2. If fw_queue is not empty, split or pad the original request from the server into 64-byte aligned trunk TLM transactions
  3. Push the trunk transactions into payload_queue
  4. Check whether payload_queue is non-empty and send the transactions through either transport_dbg or nb_transport_fw
  5. When receiving trunk responses from nb_transport_bw, start aggregating responses. The aggregation includes:
    • Setting the trunk ack flag to true
    • If it is a read request, storing the response data to msg_info_map_[msgid].data, which is a vector that has the same size as the original request
  6. When all trunks are received, insert the response into bw_queue

libcomm

When using IPC to communicate, we use a C library, libcomm, to ensure the message format is aligned between the server and clients. libcomm contains two public header files:

  1. msg.h: defines the message format, available OPs, and message-manipulation methods
    • Available OPs: TERMINATE, PROBE, SIGNAL_REGISTER, WRITE, READ, ACK, ERROR
    • A msg_t struct contains a header and a payload, where the header has a fixed format and the payload is a byte array
  2. comm.h: defines the IPC communication methods
    • During communication, we first send a header that contains the OP to state our intention
    • For a read request
      1. Send read header
      2. Receive ack header
      3. Receive read payload (data)
    • For a write request
      1. Send write header
      2. Send write payload (data)
      3. Receive ack header
    • The read-request and write-request sequences are encapsulated in the methods ipc_send_read_msg and ipc_send_write_msg

Memory managers (MM)

A tlm_mm_interface is required by the TLM generic payload's constructor. Currently we have two MM implementations:

  • Null: does nothing special; it simply deletes the payload when freeing one.
  • Simple: implements an object pool pattern that recycles freed payloads.

Usage:

// Using Null MM
auto *p1 = new tlm::tlm_generic_payload(mm::Null);
p1->release();  // deleted by Null MM

// Using Simple MM
auto *p2 = mm::Simple::GetInstance().Allocate();  // Simple MM is a singleton
p2->release();  // recycle back to the pool

Unit tests and integration tests

Unit tests and integration tests are written in Lua. Unit tests verify the correctness of individual bindings, while integration tests exercise the bindings together, checking whether they behave correctly when interacting with each other.

Currently, we don't utilize any existing framework for writing tests in Lua. Instead, we simply use the built-in assert to test for expected results.

Take systemc/time.lua for example:

local t0 = sc.ZERO_TIME

for _ = 1, 100 do
  -- Randomly generate a step time
  local step = sc.time(math.random(), sc.time_unit.NS)

  -- Run the simulation for the step time
  sc.start(step)
  print(sc.time_stamp())

  assert(t0 + step == sc.time_stamp(), "Current time must be t0 + step time")

  -- Update t0
  t0 = t0 + step
end

print("Pass!")

In this unit test, we test:

  1. Whether the sc.time binding correctly returns sc_time objects.
  2. Whether the sc.start binding actually starts the simulation with a given time.
  3. Whether we can output current time stamp via sc.time_stamp binding.
  4. Whether the operator + overloading for sc.time works correctly.

Conventionally, we print("Pass!") at the end of the script to indicate that the test successfully finishes.

To add a Lua script to the test suite, use the CMake function add_casvp_test_script, where KIND can be integration and LUA_SCRIPT is the script to run for testing. For easier usage, we define a helper macro on top of it.

Then, after configuring with cmake -B build -D ENABLE_TESTING=1, you can run ctest --test-dir build to execute all tests.

Logs

Currently, prettified logs can be dumped via the functions defined in src/utils/output.h. These functions utilize the {fmt} library for formatting the output. Please refer to the {fmt} docs for detailed usage.

The logs are categorized into 3 types:

  • Info
  • Warning
  • Fatal

Neither Info nor Warning affects the simulation, while Fatal throws a fatal_error exception that terminates casvp directly.