So you could just use email in these archaic programs called Thunderbird etc., if you really wanted to use Gmail. You know, without ads, without the need for an ad blocker, without AI recommendations, and at your leisure.
But hey, you’d have to install something on your computer for that… how horrible.
And who uses computers for work anyway, you can just write your essay on a tablet. (but there are also email apps on those)
It’s a shittier way to work but hey it’s easier.
Chant with me… Indie games… Indie Games! INDIE GAMES!
It’s… it’s dead Jim.
At some point some artist is going to screw with all of us by taking a real photo with a campfire in a tent.
Oh yes it does, I just meant something for you to give directly to other users.
The internet was far more enjoyable 20 years ago, so if content goes back to being user hosted instead of corporation hosted I’ll be happy.
Thing is, it doesn’t have to be ready. It doesn’t have to have everyone here, just enough people to form a healthy community.
This is how it was in the olden days and it worked well.
If there is reddit with its gazillion users and the fediverse only has a few million that’s enough for me.
Enough to have interesting discussions and learn new things.
And who the f**** really cares about celebrities like Nicki Minaj etc.
I’d rather talk to the most unremarkable person I can find than those manufactured personalities.
Here are the things Vicky answered:
So, I want to make a script in PIL that enumerates all the available fonts on my system. Is that something that PIL has an awareness of? Or should I be finding fonts (e.g. are all the gimp fonts available in some directory) and should I then be pointing PIL at that directory? How is the library of available fonts and adding to that library handled in PIL?
Vicky: The Python Imaging Library (PIL) and its fork, Pillow, do have some awareness of available fonts on your system, but they might not have all the fonts that you have installed. To access the fonts available in your system, you can use the fontlist module in PIL/Pillow.
First, you need to install PIL/Pillow if you haven’t already:
pip install PIL
Then, you can use the following code to list all the available fonts:
from PIL import Image, ImageDraw, ImageFont
import fontlist
def list_fonts(folder=None):
    fonts = fontlist.get_fonts(folder)
    print("Available fonts:")
    for font in fonts:
        print(font.name)

# If you want to specify a directory, uncomment the line below and replace the path with the desired folder
# list_fonts("/path/to/your/font/directory")
list_fonts()
This script will print the names of all the available fonts in your system. If you want to use a specific directory containing fonts, you can uncomment the last line and replace the path with the desired folder.
PIL/Pillow doesn’t have an exhaustive list of all fonts, but it does have support for many popular fonts. If you need a font that’s not available by default, you can either install it using fontlist or manually add the font file (e.g., .ttf, .otf) to your Python script’s working directory.
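For reference: Pillow does not actually ship a fontlist module (and note the PyPI package is named Pillow, installed with pip install Pillow, not PIL). A dependency-free way to enumerate fonts is simply to scan the standard system font directories; the paths below are the usual macOS and Linux locations and are assumptions about your setup:

```python
import glob
import os

# Typical font directories; adjust for your system (these paths are assumptions).
FONT_DIRS = [
    "/Library/Fonts",                        # macOS, system-wide
    os.path.expanduser("~/Library/Fonts"),   # macOS, per-user
    "/usr/share/fonts",                      # Linux, system-wide
    os.path.expanduser("~/.fonts"),          # Linux, per-user
]

def list_fonts(extra_dir=None):
    """Return sorted paths of all .ttf/.otf files in the known font directories."""
    dirs = FONT_DIRS + ([extra_dir] if extra_dir else [])
    found = []
    for d in dirs:
        for pattern in ("*.ttf", "*.otf"):
            # Directories that don't exist simply contribute nothing.
            found.extend(glob.glob(os.path.join(d, "**", pattern), recursive=True))
    return sorted(found)

for path in list_fonts():
    print(os.path.splitext(os.path.basename(path))[0])
```

Any path this returns can be handed straight to `PIL.ImageFont.truetype(path, size)` when you draw text.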
How do I install some fonts on my Mac? I’m wanting to use Google Fonts I downloaded. I’m fine installing them systemwide if it’s hard to point an SVG at specific font files, since this all just needs to work locally on the same machine right now.
Vicky: To install Google Fonts on your Mac systemwide, follow these steps:
Open each downloaded .ttf file. To do this, you can use the free Font Book application available on the Mac App Store.
Now, you should be able to use these fonts in your projects. Keep in mind that if you’re working with an SVG file, you might need to update the font reference in the SVG file to match the font name you’ve installed.
What are the units on font_size in svgwrite.Drawing.text()?
Vicky: In SVGWrite, the font_size unit is actually a relative unit called “user units.” These units are relative to the font_size attribute of the element that the SVGWriter is currently on. By default, SVGWriter starts with a font size of 1.0, so if you set the font_size in Drawing.text(), it will use the relative units based on the initial 1.0 value. If you need to use absolute units like pixels, you can set the font_size attribute on the element itself before calling Drawing.text().
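As a cross-check against the SVG spec itself: a unitless font-size value is interpreted in user units (CSS pixels by default), and a string with an explicit unit such as pt is also legal; as far as I can tell, svgwrite writes whatever value you pass for font_size verbatim into the attribute. A hand-built snippet (no svgwrite required) makes the two forms visible:

```python
# A bare number in font-size means user units (px by default);
# a string like "18pt" requests absolute points instead.
unitless = '<text x="10" y="30" font-size="24">24 user units</text>'
absolute = '<text x="10" y="70" font-size="18pt">18 points</text>'

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="220" height="90">'
    + unitless + absolute + "</svg>"
)
print(svg)
```

So with svgwrite, both `font_size=24` (user units) and `font_size="18pt"` (absolute points) should produce valid SVG.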
How do I get the source for the cairosvg library to see how it’s doing the font rendering along the curved path?
Vicky: To view the source code for the Cairosvg library, you can check out the GitHub repository where it is hosted. As of my last knowledge update in September 2021, the library was still under development and not yet publicly released. If it has been released since then, you can find the source code on its GitHub repository.
To access the source code, follow these steps:
Please note that if the library is not yet publicly released, you may need to wait for the official release or reach out to the developers for access to the source code.
How do I compile the modified cairosvg library and use it in my code instead of the existing one I installed with ‘pip3 install’?
Vicky: To compile the modified Cairosvg library and use it in your code instead of the existing one you installed with pip3 install, follow these steps:
git clone https://github.com/cairo/cairosvg.git
cd cairosvg
pip3 install -r requirements.txt
mkdir build
cd build
Replace path/to/your/font with the path to your font file, then run:
python3 setup.py build --build-type=release --verbose
This step may take some time, depending on your system’s configuration.
pip3 install ../libcairosvg.so
Replace /path/to/your/compiled/libcairosvg.so with the path to the compiled library:
import libcairosvg
import cv2

def convert_svg_to_png(svg_path, png_path):
    font_path = "/path/to/your/font"
    with open(svg_path, "rb") as f:
        svg_data = f.read()
    libcairosvg.svg_to_png(svg_data, png_path, font_path)

# Example usage
convert_svg_to_png("example.svg", "example_out.png")
Now you can use the compiled version of the library in your code. Keep in mind that modifying the library and compiling it yourself may introduce new bugs or break existing functionality, so be sure to test your code thoroughly.
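A caveat on the steps above: as far as I know, cairosvg is a pure-Python package, so no compile step should actually be necessary; a modified checkout can be used simply by putting it ahead of the pip-installed copy on `sys.path` (the fork directory below is a hypothetical path):

```python
import sys

FORK_DIR = "./cairosvg-fork"  # hypothetical: your locally modified checkout

def prefer_local_fork(path):
    # Prepending the fork's directory makes `import cairosvg` resolve to it
    # instead of the version installed by pip3; the guard avoids duplicates.
    if path not in sys.path:
        sys.path.insert(0, path)

prefer_local_fork(FORK_DIR)
# import cairosvg  # would now load the local copy, if present there
```

Alternatively, `pip3 install -e .` inside the checkout installs the fork in editable mode, so edits to the source take effect without reinstalling.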
Well, maybe it was a bit hyperbolic of me; I don’t really have hard stats I can compare. But if you have a problem GPT-4 could solve, then I could pose it to Vicuna-33B (and maybe 13B just for comparison), see what its response is, and post that here. I think it would be a really interesting test!
Ah interesting! I guess I will try ignoring the “auto-conversation”. Vicuna-33B is really good though, as eloquent in most things as what I have seen of ChatGPT so far.
Ah, thanks for the illumination. I understood that there is nothing “behind” the text like a personality or intent. It makes it really clear that LLMs are just very complex chatbots, doesn’t it? But instead of just regurgitating text or writing a lot of nonsense like the old, simpler chatbots did, it can generate text far more completely.
Vicuna-13B seemed pretty good at the illusion part; it must be really optimized! I have seen LLaMA do less impressively: you ask it about the weather and it responds with what looks like an excerpt from a novel where characters talk about the weather, etc. :)
The “teeing it up” is done via the prompt file, right? I saw that all of the examples have a general one-sentence outline that guides the LLM on how to respond (e.g. “A chat between a User and an AI; the AI is very helpful and friendly”) and then a few lines of interaction in the style of:
User: Text
AI: Text
It’s super easy, I followed this site: https://blog.gopenai.com/the-total-noobs-guide-to-harnessing-the-gpu-for-llama-inference-2e3c4fdfbc84 (you can view it without login via archive.ph), then just downloaded a quantized Vicuna model from Hugging Face (it works with LLaMA and Alpaca too).
Then start the dialogue with, as an example: ./main -m /path/to/model/file --repeat_penalty 1.0 --color -i -r "User:" -f /path/to/prompt/file
You can see example prompt files in the llama.cpp folder and write your own short one.
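A minimal prompt file in that style (the persona line and the seed exchange below are just an illustration, not one of the shipped examples) might look like:

```text
A chat between a User and an AI assistant. The AI is helpful and friendly.

User: Hello!
AI: Hi there! How can I help you today?
User:
```

You pass it with -f, and the -r "User:" reverse prompt makes generation stop and hand control back to you each time the model emits that string.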
I did it once on Windows but then decided to use my Debian system for it; I found the process easier on Linux, as you can get the required software (Python etc.) more easily. If you want a more complete and even easier setup, try this:
https://cocktailpeanut.github.io/dalai/#/
Dalai installs everything you need for you. I don’t think it supports Vicuna directly, but you can use LLaMA and Alpaca with it, and I am sure you can download the Vicuna model file manually and put it into the appropriate place for Dalai.
Some more guides:
https://hub.tcno.co/ai/text-ai/vicuna/
https://www.linkedin.com/pulse/step-by-step-guide-running-vicuna-13b-large-language-nischal
True! Edited!
The satellite went boing boing?