First 9 career switchers complete HAN's Make IT Work [Computable]
Nine participants in the Make IT Work programme at the HAN University of Applied Sciences (Hogeschool van Arnhem en Nijmegen) received their certificates yesterday. That makes them the first cohort of HAN students to successfully complete a retraining programme...
Fransie Becker country manager of Signpost Netherlands [Computable]
Fransie Becker is to become the new country manager of Signpost in the Netherlands. He is expected to grow the company to around twenty employees by next year.
Capgemini makes Airbus ‘cloud-first’ [Computable]
Airbus has chosen Capgemini to deliver a cloud-first transformation programme for the worldwide operations of Commercial Aircraft and Helicopters. As a strategic partner of Airbus, Capgemini will now provide a fully managed service for the core cloud infrastructure of these Airbus operations.
Corporate espionage against Appian costs Pegasystems 2 billion [Computable]
A US jury has found that Pegasystems committed corporate espionage against Appian and must pay over two billion dollars. The obligation to pay only becomes final once all possible appeals have concluded.
Guidance for online platforms in the making [Computable]
The Netherlands Authority for Consumers and Markets (ACM) is preparing guidance setting out what information online platforms must provide to businesses that sell goods or services to consumers through those platforms.
Hyperion Lab signs 9 AI and HPC startups [Computable]
Hyperion Lab, the innovation lab in Amsterdam Zuidoost, has announced nine new startups that will take part in the Hyperion Lab Showcase Program. The six-month programme is aimed at supporting European innovations in artificial intelligence...
Inkscape 1.2 is Now Available to Download [OMG! Ubuntu!]
Ahoy, a new version of Inkscape has appeared. We take a look at the key new features in Inkscape 1.2, the official release video, and share download links.
Scrivano is a New App to Take Handwritten Notes on Linux [OMG! Ubuntu!]
Scrivano is a new handwritten notes app for Linux. We look at Scrivano's features, which include a few terrific time-saving tools, and how to install it.
Inkscape 1.2 Released with Support for Multi-Page Documents, Numerous Enhancements [Linux Today]
Coming almost a year after Inkscape 1.1, the Inkscape 1.2 release is here to introduce a new Page tool that implements support for multiple pages in Inkscape documents. To access the new Page tool, click on the lowest button in the toolbar. The tool also lets you import and export multi-page PDF documents.
Also new in Inkscape 1.2 is a ‘Tiling’ Live Path Effect (LPE) that allows for interactive tiling, the ability to import SVG images from Open Clipart, Wikimedia Commons, and other online sources, on-canvas alignment snapping, as well as the ability to edit markers and dash patterns.
How to Install Nginx, MariaDB, and PHP (LEMP) on Ubuntu 22.04 LTS [Linux Today]
LEMP is an acronym for a group of free and open-source software often used to serve web applications. It represents the configuration of Nginx Web Server, MySQL / MariaDB Database, and PHP Scripting Language on a Linux operating system.
This guide shows you step-by-step the installation process of the LEMP stack, Nginx, MariaDB, and PHP, in Ubuntu 22.04 LTS.
Alt Workstation K 10.0 Released [Linux Today]
The release of the “Alt Workstation K 10” distribution ships with a graphical environment based on KDE Plasma. Its boot images are prepared for the x86_64 architecture and are available over HTTP from mirrors such as Yandex Mirror, Distrib Coffee, and Infania Networks.
How to Use Sed in Linux for Basic Shell Tasks [Linux Today]
Sed is a simple program. It does not create or edit any files. Despite that, it is a powerful utility that can make your Linux life easier.
9to5Linux Weekly Roundup: May 15th, 2022 [Linux Today]
The week was really great for Linux news and releases. We got huge news from NVIDIA as they finally decided to open-source their graphics drivers, we got a new Fedora Linux release for you to play with on your PC, and we got a new generation of the Kubuntu Focus M2 Linux laptop with upgraded internals.
On top of that, I take a look at Fedora Media Writer 5.0, notify you about the upcoming end of life of Ubuntu 21.10 and LibreOffice 7.2, and give you a heads-up about the latest distro and software releases. You can enjoy these and much more in 9to5Linux’s Linux Weekly Roundup for May 15th, 2022, below!
How to Build and Install a Custom Kernel on Ubuntu [Linux Today]
Compiling your own custom Linux kernel allows you to get the most out of your hardware and software. Learn how to build and install one on Ubuntu today.
Top 10 Best Linux Distributions in 2022 For Everyone [Linux Today]
A list of the best Linux distributions in 2022 for every user – students, creators, developers, and casual users – with guidance on picking one.
NetworkManager 1.38 Released with IPv6, Other Improvements [Linux Today]
The NetworkManager 1.38 release is here to further improve IPv6 support and other key features. Learn more here.
5 Tools to Easily Create a Custom Linux Distro [Linux Today]
If you want a Linux desktop that is tailored to your needs, your best option is to create a custom Linux distro. Here’s how you can do it.
Fedora 35 v Fedora 36: What’s the Difference? [Linux Today]
Test and Code: 188: Python's Rich, Textual, and Textualize - Innovating the CLI [Planet Python]
Will McGugan has brought a lot of color to CLIs within Python due to Rich.
Then Textual started rethinking full command line applications, including layout with CSS.
And now Textualize, a new startup, is bringing CLI apps to the web.
Special Guest: Will McGugan.
Sponsored By:
- Rollbar (http://rollbar.com/testandcode): With Rollbar, developers deploy better software faster.
Links:
- rich: https://github.com/Textualize/rich
- rich-cli: https://github.com/Textualize/rich-cli
- textual: https://github.com/Textualize/textual
- Textualize.io: https://www.textualize.io/
- Rich Gallery: https://www.textualize.io/rich/gallery
- Textualize Gallery: https://www.textualize.io/textual/gallery
- Python Bytes Podcast: https://pythonbytes.fm/
Hynek Schlawack: Better Python Object Serialization [Planet Python]
The Python standard library is full of underappreciated gems. One of them allows for simple and elegant function dispatching based on argument types. This makes it perfect for serialization of arbitrary objects – for example to JSON in web APIs and structured logs.
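The gem being alluded to is most likely functools.singledispatch; a minimal sketch of how dispatch on argument types can drive JSON serialization (the to_serializable name and the example are illustrative, not necessarily the post's own code) might look like this:

import json
from datetime import datetime
from functools import singledispatch

@singledispatch
def to_serializable(val):
    """Fallback: use the string representation."""
    return str(val)

@to_serializable.register
def _(val: datetime):
    """datetime objects become ISO 8601 strings."""
    return val.isoformat()

# json.dumps(obj, default=to_serializable) now handles arbitrary types.
print(json.dumps({"when": datetime(2022, 5, 16)}, default=to_serializable))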
Andre Roberge: Python 🐍 fun with emojis [Planet Python]
At EuroSciPy in 2018, Marc Garcia gave a lightning talk which started by pointing out that scientific Python programmers like to alias everything, such as
import numpy as np
import pandas as pd
and suggested that they perhaps would prefer to use emojis, such as
import pandas as 🐼
However, Python does not support emojis as code, so the above line cannot be used.
A year prior, Thomas A Caswell had created a pull request for CPython that would have made this possible. This code would have allowed the use of emojis in all environments, including in a Python REPL and even in Jupyter notebooks. Unsurprisingly, this was rejected.
Undeterred, Geir Arne Hjelle created a project called pythonji (available on PyPI) which enabled the use of emojis in Python code, but in a much more restricted way. With pythonji, one can run modules ending with 🐍 instead of .py from a terminal. However, such modules cannot be imported, nor can emojis be used in a terminal.
When I learned about this attempt by Geir Arne Hjelle from a tweet by Mike Driscoll, I thought it would be a fun little project to implement with ideas, my own import-hook project. Below, I use the same basic example included in the original pythonji project.
And, it works in Jupyter notebooks too!
😉
A. Jesse Jiryu Davis: Why Should Async Get All The Love?: Advanced Control Flow With Threads [Planet Python]
I spoke at PyCon 2022 about writing safe, elegant concurrent Python with threads. The video is coming soon; here’s a written version of the talk. Asyncio is really hip. And not just asyncio—the older async frameworks like Twisted and Tornado, and more recent ones like Trio and Curio, are hip, too. I think they deserve to be! I’m a big fan. I spent a lot of time contributing to Tornado and asyncio some years ago.
PyCon: PyCon US 2022 Recordings Update [Planet Python]
We understand that the PyCon US recordings are an incredibly important resource to the community. We were looking forward to providing the PyCon US 2022 recordings very soon after the event – especially since we know many of you weren’t able to attend this year’s conference in person. Regrettably, we have encountered some technical obstacles this year. We are working with our AV partners at the venue to resolve things as soon as possible.
Because of the ongoing pandemic, we were unable to work with our usual vendor for PyCon US conferences. They are based in Canada and understandably didn’t want to commit to travel to the US this year. This resulted in PyCon US contracting with a new AV vendor for the first time in many years. We were very thorough in providing details, but ultimately this was a new team doing work to new specifications.
The onsite AV team has provided an update on the technical issues as follows: “Some of the sessions are missing audio or graphics and are being worked through. There is a backup drive of all the content that has been mailed to the editing team to hopefully resolve those that are missing graphics and/or audio.” We remain hopeful that everyone’s sessions will eventually be posted with all audio and graphics intact, but it is going to take more time than we would like.
Secondment agencies to 'bind' staff more often with permanent contracts [Computable]
Secondment agencies will increasingly offer permanent contracts to bind seconded workers to them. That form of employment should help them hold on to staff in the tight labour market, says the trade association for secondment organisations.
ONLYOFFICE 7.1 is Out With New PDF Viewer, Slideshow Animations + More [OMG! Ubuntu!]
Fans of open source office software are in for a treat as a brand new version of ONLYOFFICE is now available to download featuring various improvements.
Kushal Das: OAuth Security Workshop 2022 [Planet Python]
Last week I attended the OAuth Security Workshop in Trondheim, Norway. It was a 3-day, single-track conference, where the first half of each day consisted of pre-selected talks and the second half of unconference talks and side meetings. This was also my first proper conference since COVID emerged in the world.
After many years, I felt the whole excitement of being a total newbie at something and of suddenly being able to meet all the people behind the ideas. I reached the conference hotel in the afternoon of day 0 and met the organizers in the lobby area. That chat went on for a long time, and as more and more people kept checking into the hotel, I realized that it was a kind of reunion for many of the participants. Though a few of them had met at a conference in California just a week earlier, they were all excited to meet again.
To understand how welcoming any community is, just notice how it behaves towards new folks. I think the Python community stands high in this regard, and I am very happy to say the whole OAuth/OIDC/identity-related community is just as excellent. Even though I kept introducing myself as the new person in this identity land, not once did I feel unwelcome. I attended OpenID-related working group meetings during the conference, had multiple hallway chats, and talked to people while walking around the beautiful city. Everyone was happy to explain things to me in detail, even though most of the people there have already spent 5-15+ years in the identity world.
What happens in Trondheim, stays in Trondheim.
I generally do not attend many talks at conferences, as they get recorded. But here, the conference was a single track, and also, there were no recordings.
The first talk was related to formal verification, and this was the first time I saw that (to my mind, scary) maths on the big screen. But full credit to the speakers, as they explained things in such a way that even an average programmer like me understood each step. After this talk, we jumped into the world of OAuth/OpenID. One funny thing: whenever someone mentioned an RFC number, the authors turned out to be in the meeting room.
In the second half, we had the GNAP master class from Justin Richer. And once again, the speaker straightforwardly explained such deep technical details so that everyone in the room could understand it.
The evening before, people had mentioned a few times that in heated technical discussions many RFC numbers would be thrown around, though in the end there were not so many that I got too scared :)
I also managed to meet Roland for the first time. We had long chats about the status of Python in the identity ecosystem and also about Identity Python. I took some notes on how we can improve the usage of Python here, and I will most probably start writing about that in the coming weeks.
In multiple talks, researchers and people from the industry pointed out the mistakes made in this space from the security point of view. Even though for many things we have clear instructions in the specs, there is no guarantee that implementers will follow them properly, thus causing security gaps.
At the end of day 1, we had a special Organ concert at the beautiful Trondheim Cathedral. On day 2, we had a special talk, “The Viking Kings of Norway”.
If you let me talk about my experience at the conference, I don't think I would stop within two hours. There was so much excitement and new information, and the whole feeling of going back to my early days when I knew almost nothing. Every discussion was full of learning opportunities (all discussions are anyway, but being a newbie is a different level of excitement). The only sad part was leaving Anwesha & Py back in Stockholm; this was the first time I stayed away from them after moving to Sweden.
Just before the conference ended, Aaron Parecki gave me a surprise gift. I spent time with it during the whole flight back to Stockholm.
This conference had the best food experience of my life for a conference, from breakfasts and lunches to big snack tables, dinners, and restaurant meals. At least four people said to me during the conference, "oh, it feels like we are only eating and sometimes talking".
Another thing I really loved to see is that the two primary conference organizers are university roommates who are continuing their friendship and journey in a very beautiful way. Standing outside the hotel after midnight, talking about random things in life, and seeing two longtime friends excited about similar things felt so nice.
I also want to thank the whole organizing team; the local organizers, Steinar, and the rest of the team did a superb job.
Python Morsels: Reading binary files in Python [Planet Python]
How can you read binary files in Python? And how can you read very large binary files in small chunks?
If we try to read a zip file using the built-in open function in Python using the default read mode, we'll get an error:
>>> with open("exercises.zip") as zip_file:
...     contents = zip_file.read()
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8e in position 11: invalid start byte
We get an error because zip files aren't text files, they're binary files.
To read from a binary file, we need to open it with the mode rb instead of the default mode of rt:
>>> with open("exercises.zip", mode="rb") as zip_file:
... contents = zip_file.read()
...
When you read from a binary file, you won't get back strings. You'll get back a bytes object, also known as a byte string:
>>> with open("exercises.zip", mode="rb") as zip_file:
... contents = zip_file.read()
...
>>> type(contents)
<class 'bytes'>
>>> contents[:20]
b'PK\x03\x04\n\x00\x00\x00\x00\x00Y\x8e\x84T\x00\x00\x00\x00\x00\x00'
Byte strings don't have characters in them: they have bytes in them.
The bytes in a file won't help us very much unless we understand what they mean.
You probably won't read a …
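The excerpt ends there. For the "very large binary files in small chunks" question posed at the start, one common pattern is to read a fixed number of bytes at a time; here is a minimal sketch (the process() call is a hypothetical stand-in for real work):

def read_in_chunks(path, chunk_size=8192):
    """Yield successive chunks of bytes from a binary file."""
    with open(path, mode="rb") as binary_file:
        while True:
            chunk = binary_file.read(chunk_size)
            if not chunk:  # an empty bytes object signals end of file
                break
            yield chunk

for chunk in read_in_chunks("exercises.zip"):
    process(chunk)  # process() is a hypothetical handler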
Real Python: Linear Regression in Python [Planet Python]
You’re living in an era of large amounts of data, powerful computers, and artificial intelligence. This is just the beginning. Data science and machine learning are driving image recognition, development of autonomous vehicles, decisions in the financial and energy sectors, advances in medicine, the rise of social networks, and more. Linear regression is an important part of this.
Linear regression is one of the fundamental statistical and machine learning techniques. Whether you want to do statistics, machine learning, or scientific computing, there’s a good chance that you’ll need it. It’s best to build a solid foundation first and then proceed toward more complex methods.
By the end of this article, you'll have learned what linear regression is, what it's used for, and how to implement it in Python.
Regression analysis is one of the most important fields in statistics and machine learning. There are many regression methods available. Linear regression is one of them.
Regression searches for relationships among variables. For example, you can observe several employees of some company and try to understand how their salaries depend on their features, such as experience, education level, role, city of employment, and so on.
This is a regression problem where data related to each employee represents one observation. The presumption is that the experience, education, role, and city are the independent features, while the salary depends on them.
Similarly, you can try to establish the mathematical dependence of housing prices on area, number of bedrooms, distance to the city center, and so on.
Generally, in regression analysis, you consider some phenomenon of interest and have a number of observations. Each observation has two or more features. Following the assumption that at least one of the features depends on the others, you try to establish a relation among them.
In other words, you need to find a function that maps some features or variables to others sufficiently well.
The dependent features are called the dependent variables, outputs, or responses. The independent features are called the independent variables, inputs, regressors, or predictors.
Regression problems usually have one continuous and unbounded dependent variable. The inputs, however, can be continuous, discrete, or even categorical data such as gender, nationality, or brand.
It’s a common practice to denote the outputs with 𝑦 and the inputs with 𝑥. If there are two or more independent variables, then they can be represented as the vector 𝐱 = (𝑥₁, …, 𝑥ᵣ), where 𝑟 is the number of inputs.
Typically, you need regression to answer whether and how some phenomenon influences the other or how several variables are related. For example, you can use it to determine if and to what extent experience or gender impacts salaries.
Regression is also useful when you want to forecast a response using a new set of predictors. For example, you could try to predict electricity consumption of a household for the next hour given the outdoor temperature, time of day, and number of residents in that household.
Regression is used in many different fields, including economics, computer science, and the social sciences. Its importance rises every day with the availability of large amounts of data and increased awareness of the practical value of data.
Linear regression is probably one of the most important and widely used regression techniques. It’s among the simplest regression methods. One of its main advantages is the ease of interpreting results.
When implementing linear regression of some dependent variable 𝑦 on the set of independent variables 𝐱 = (𝑥₁, …, 𝑥ᵣ), where 𝑟 is the number of predictors, you assume a linear relationship between 𝑦 and 𝐱: 𝑦 = 𝛽₀ + 𝛽₁𝑥₁ + ⋯ + 𝛽ᵣ𝑥ᵣ + 𝜀. This equation is the regression equation. 𝛽₀, 𝛽₁, …, 𝛽ᵣ are the regression coefficients, and 𝜀 is the random error.
Linear regression calculates the estimators of the regression coefficients or simply the predicted weights, denoted with 𝑏₀, 𝑏₁, …, 𝑏ᵣ. These estimators define the estimated regression function 𝑓(𝐱) = 𝑏₀ + 𝑏₁𝑥₁ + ⋯ + 𝑏ᵣ𝑥ᵣ. This function should capture the dependencies between the inputs and output sufficiently well.
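As a concrete illustration of estimating the weights 𝑏₀, 𝑏₁, …, 𝑏ᵣ described above, here is a minimal sketch using scikit-learn with invented sample data (the full article covers the implementation options in depth):

import numpy as np
from sklearn.linear_model import LinearRegression

# Invented sample data: five observations of one input variable.
x = np.array([5, 15, 25, 35, 45]).reshape(-1, 1)
y = np.array([5, 20, 14, 32, 22])

model = LinearRegression().fit(x, y)
print(model.intercept_, model.coef_)  # the estimated b0 and b1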
Python for Beginners: Set Difference in Python [Planet Python]
Sets are used to store unique objects. Sometimes, we might need to find the elements of one set that are not present in another given set. For this, we use the set difference operation. In this article, we will discuss what the set difference is and several approaches to finding it in Python.
Given two sets A and B, the set difference (A-B) is the set consisting of all the elements that belong to A but are not present in set B.
Similarly, the set difference (B-A) is a set consisting of all the elements that belong to B but are not present in set A.
Consider the following sets.
A={1,2,3,4,5,6,7}
B={5,6,7,8,9,10,11}
Here, set A-B will contain the elements 1, 2, 3, and 4, as these elements are present in set A but do not belong to set B. Similarly, set B-A will contain the elements 8, 9, 10, and 11, as these elements are present in set B but do not belong to set A.
Let us now discuss approaches to finding the set difference in Python.
Given the sets A and B, if we want to find the set difference A-B, we will first create an empty set named output_set. After that, we will traverse set A using a for loop. During traversal, we will check whether each element is present in set B. If an element of set A doesn't belong to set B, we will add it to the output_set using the add() method.
After execution of the for loop, we will have the set difference A-B in the output_set. You can observe this in the following example.
A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
output_set = set()
for element in A:
    if element not in B:
        output_set.add(element)
print("The set A is:", A)
print("The set B is:", B)
print("The set A-B is:", output_set)
Output:
The set A is: {1, 2, 3, 4, 5, 6, 7}
The set B is: {5, 6, 7, 8, 9, 10, 11}
The set A-B is: {1, 2, 3, 4}
If we want to find the set difference B-A, we will traverse set B using a for loop. During traversal, we will check whether each element is present in set A. If an element of set B doesn't belong to set A, we will add it to the output_set using the add() method.
After execution of the for loop, we will have the set difference B-A in the output_set. You can observe this in the following example.
A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
output_set = set()
for element in B:
    if element not in A:
        output_set.add(element)
print("The set A is:", A)
print("The set B is:", B)
print("The set B-A is:", output_set)
Output:
The set A is: {1, 2, 3, 4, 5, 6, 7}
The set B is: {5, 6, 7, 8, 9, 10, 11}
The set B-A is: {8, 9, 10, 11}
Python provides us with the difference() method to find the set difference. The difference() method, when invoked on set A, takes set B as an input argument, calculates the set difference, and returns a set containing the elements of (A-B). You can observe this in the following example.
A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
output_set = A.difference(B)
print("The set A is:", A)
print("The set B is:", B)
print("The set A-B is:", output_set)
Output:
The set A is: {1, 2, 3, 4, 5, 6, 7}
The set B is: {5, 6, 7, 8, 9, 10, 11}
The set A-B is: {1, 2, 3, 4}
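For completeness, Python's set type also supports the subtraction operator, which computes the same difference more concisely (this operator is standard Python, though the excerpt above does not show it):

A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
print(A - B)  # {1, 2, 3, 4}
print(B - A)  # {8, 9, 10, 11}

One difference worth knowing: difference() accepts any iterable as its argument, while the - operator requires both operands to be sets.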
In this article, we have discussed how to find the set difference in Python. To learn more about sets, you can read this article on set comprehension in Python. You might also like this article on list comprehension in Python.
Mike Driscoll: PyDev of the Week: Raza (Rython) Zaidi [Planet Python]
This week we welcome Raza Zaidi (@razacodes) as our PyDev of the Week! Raza is a content creator on Twitter and YouTube. You can learn about Python, data science, Django, and more on Raza's YouTube channel. Check it out when you get a chance!
Now let's spend a few moments getting to know Raza better!
Can you tell us a little about yourself (hobbies, education, etc):
Hi, I’m Raza, Head of Dev Rel at thirdweb. An accountant by profession, but a technology enthusiast at heart. I'm wildly passionate about emerging technologies and about educating people on them. I consider myself a below-average developer and walking proof that anyone can learn how to develop. Currently I’m focused on teaching developers how to get started in Web3 through my Twitter, TikTok and YouTube channels. By no means do I think that Python is the best programming language out there. In my spare time, I love to binge-watch anime.
Why did you start using Python?
I used to be head of a Data engineering platform and honestly I thought all these devs were so cool. I just started to hang out with the devs and asked them to use me as a test bunny. I learned how to spin up environments and run basic Python scripts and that’s how I got started with Python.
What other programming languages do you know and which is your favorite?
I know a bit of JavaScript and Solidity. I like Solidity, but nothing beats Python.
What projects are you working on now?
A couple. I think Python is tremendously underrepresented in the web3 space. There are so many cool libraries, and I'm working on content to bring more awareness. Besides that, I'm diving into the beginner space again with a platform to help more people get started with Python. Stay tuned!
Which Python libraries are your favorite (core or 3rd party)?
How did you decide to become a content creator?
I guess I look at myself as a really bad programmer and use that power to simplify concepts for myself to understand. Then I just share that information. So it wasn’t a conscious decision, I kind of rolled into it.
What challenges have you had as a content creator and how did you overcome them?
I guess finding new ideas and structure. I'm learning a lot by engaging with the community, and I need to do that more. But that's a great way to get inspiration.
Is there anything else you’d like to say?
Please reach out if you also want to spread the message about Python. I'm looking for like-minded devs who want to contribute to beginner content and help people get started developing in Python!
Thanks for doing the interview, Raza!
ListenData: Only size-1 arrays can be converted to Python scalars [Planet Python]
Numpy is one of the most used modules in Python, employed in a variety of tasks ranging from creating arrays to mathematical and statistical calculations. Numpy also brings efficiency to Python programming. While using numpy you may encounter this error: TypeError: only size-1 arrays can be converted to Python scalars
It is one of the most frequently appearing errors, and sometimes it becomes a daunting challenge to solve.
There are 5 methods to solve this error.
import numpy as np
x = np.array([2, 3.5, 4, 5.3, 27])
Let's convert the values to integers (without decimals):
np.int(x)
np.int() is a deprecated alias, so you can simply use int(x), but you will get the same error. That is because both np.int() and int(x) accept only a single value, not multiple values stored in an array. In other words, you passed an array instead of a scalar variable.
Using the .astype() method:
x.astype(int)
Output: array([ 2,  3,  4,  5, 27])
The values 3.5 and 5.3 from the original array have been converted to 3 and 5.
To reflect the changes in the x array, use the code below:
x = x.astype(int)
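The excerpt ends here. One further commonly suggested fix (an addition of mine, not necessarily one of the article's five methods) is np.vectorize, which wraps a scalar-only function so that it maps over every element of an array:

import numpy as np

x = np.array([2, 3.5, 4, 5.3, 27])
vec_int = np.vectorize(int)  # wrap the scalar-only int() so it maps over arrays
print(vec_int(x))            # [ 2  3  4  5 27]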
We’re Off — Ubuntu 22.10 Daily Builds Available to Download [OMG! Ubuntu!]
Download the latest Ubuntu 22.10 daily build to help test the next version of Ubuntu as 'Kinetic Kudu' development kicks into gear and new features are added.
"Morphex's Blogologue": JSON viewer for JSON database [Planet Python]
I was looking to get a little done on the ethereum-classic-taxman accounting tool today, and thought a bit outside the box about what I could need in there that isn't a direct priority.
The tool uses JSON databases, I switched a little while back because there could be security issues related to using a Python pickle database as the database backend.
An added benefit of using JSON is that its content is easy to view, and I thought it could be a good thing to have a tool that creates a view of the data that is easy to navigate and read, for example for debugging purposes.
So I created this little script:
https://github.com/morphex/ethereum-classic-taxman/blob/main...
There are graphical JSON viewers on Ubuntu for example, but this little script can also have its output piped into a file, so that a database can be edited by hand in an editor. Or it could be piped to less on Linux/UNIX for viewing and searching.
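The linked script's exact contents are not reproduced here, but the core idea of producing a readable, pipeable view of a JSON database can be sketched in a few lines (assuming the backend is a plain JSON file):

import json
import sys

# Pretty-print a JSON database to stdout, so the output can be piped
# into a file or into a pager such as less.
with open(sys.argv[1]) as json_db:
    print(json.dumps(json.load(json_db), indent=2, sort_keys=True))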
On a related note, I saw some people lost their savings on the recent Luna/Terra crash. On the upside, I guess now is a chance to make a bet that the new variant with a massively higher amount of coins minted will succeed.
Podcast.__init__: Take Control Of Your Digital Photos By Running Your Own Smart Library Manager With LibrePhotos [Planet Python]
Digital cameras and the widespread availability of smartphones has allowed us all to generate massive libraries of personal photographs. Unfortunately, now we are all left to our own devices of how to manage them. While cloud services such as iPhotos and Google Photos are convenient, they aren't always affordable and they put your pictures under the control of large companies with their own agendas. LibrePhotos is an open source and self-hosted alternative to these services that puts you in control of your digital memories. In this episode the maintainer of LibrePhotos, Niaz Faridani-Rad, explains how he got involved with the project, the capabilities that it offers for managing your image library, and how to get your own instance set up to take back control of your pictures.
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Zato Blog: Integrating with Jira APIs [Planet Python]
Continuing in the series of articles about newest cloud connections in Zato 3.2, this episode covers Atlassian Jira from the perspective of invoking its APIs to build integrations between Jira and other systems.
There are essentially two modes of integration with Jira: either Jira invokes your endpoints when something of interest happens (WebHooks), or your systems invoke Jira's APIs themselves.
The first case is usually more straightforward to conceptualize - you create a WebHook in Jira, point it to your endpoint and Jira invokes it when a situation of interest arises, e.g. a new ticket is opened or updated. I will talk about this variant of integrations with Jira in a future instalment as the current one is about the other situation, when it is your systems that establish connections with Jira.
The reason why it is more practical to first speak about the second form is that, even if WebHooks are somewhat easier to reason about, they do come with their own ramifications.
To start off, assuming that you use the cloud-based version of Jira (e.g. https://example.atlassian.net), you need to have a publicly available endpoint for Jira to invoke through WebHooks. Very often, this is undesirable because the systems that you need to integrate with may be internal ones, never meant to be exposed to public networks.
Secondly, your endpoints need to have a TLS certificate signed by a public Certificate Authority and they need to be accessible on port 443. Again, both of these are something that most enterprise systems will not allow at all or it may take months or years to process such a change internally across the various corporate departments involved.
Lastly, even if a WebHook can be used, it is not always a given that the initial information that you receive in the request from a WebHook will already contain everything that you need in your particular integration service. Thus, you will still need a way to issue requests to Jira to look up details of a particular object, such as tickets, in this way reducing WebHooks to the role of initial triggers of an interaction with Jira, e.g. a WebHook invokes your endpoint, you have a ticket ID on input and then you invoke Jira back anyway to obtain all the details that you actually need in your business integration.
The end situation is that, although WebHooks are a useful concept that I will write about in a future article, they may very well not be sufficient for many integration use cases. That is why I start with integration methods that are alternative to WebHooks.
If, in our case, we cannot use WebHooks, then what next? Two good approaches are scheduled jobs and IMAP connections.
Scheduled jobs will let you periodically inquire with Jira about the changes that you have not processed yet. For instance, with a job definition as below:
Now, the service configured for this job will be invoked once per minute to carry out any integration works required. For instance, it can get a list of tickets since the last time it ran, process each of them as required in your business context and update a database with information about what has been just done - the database can be based on Redis, MongoDB, SQL or anything else.
Integrations built around scheduled jobs make the most sense when you need to make periodic sweeps across large swaths of business data; these are the “give me everything that changed in the last period” kind of interactions, when you do not know precisely how much data you are going to receive.
In the specific case of Jira tickets, though, an interesting alternative may be to combine scheduled jobs with IMAP connections:
The idea here is that when new tickets are opened, or when updates are made to existing ones, Jira will send out notifications to specific email addresses and we can take advantage of it.
For instance, you can tell Jira to CC or BCC an address such as zato@example.com. Zato will still run a scheduled job, but instead of connecting to Jira directly, that job will look up unread emails in its inbox (“UNSEEN” per the relevant RFC).
Anything that is unread must be new since the last iteration, which means that we can process each such email from the inbox, in this way guaranteeing that we process only the latest updates and dispensing with the need for our own database of already-processed tickets. We can extract the ticket ID or other details from the email, look up its details in Jira and then continue as needed.
All the details of how to work with IMAP emails are provided in the documentation but it would boil down to this:
# -*- coding: utf-8 -*-
# Zato
from zato.server.service import Service
class MyService(Service):
    def handle(self):
        conn = self.email.imap.get('My Jira Inbox').conn

        for msg_id, msg in conn.get():

            # Process the message here ..
            process_message(msg.data)

            # .. and mark it as seen in IMAP.
            msg.mark_seen()
The natural question is - how would the “process_message” function extract details of a ticket from an email?
There are several ways to do that; for instance, the notification emails that Jira sends out contain structured details such as:
Summary: Here goes description
Key: ABC-123
URL: https://example.atlassian.net/browse/ABC-123
Project: My Project
Issue Type: Improvement
Affects Versions: 1.3.17
Environment: Production
Reporter: Reporter Name
Assignee: Assignee Name
X-Atl-Mail-Meta: user_id="123456:12d80508-dcd0-42a2-a2cd-c07f230030e5",
event_type="Issue Created",
tenant="https://example.atlassian.net"
The first option is the most straightforward and likely the most convenient one - simply parse out the ticket ID and call Jira with that ID on input for all the other information about the ticket. How to do it exactly is presented in the next chapter.
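The parsing step itself can be small; here is a sketch of extracting the key from a notification body like the one above (the helper name and regular expression are illustrative, not part of Zato's API):

import re
from typing import Optional

def extract_ticket_key(email_body: str) -> Optional[str]:
    """Pull the Jira ticket key (e.g. ABC-123) out of a notification email body."""
    match = re.search(r'^Key: ([A-Z][A-Z0-9]*-\d+)$', email_body, re.MULTILINE)
    return match.group(1) if match else None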
Regardless of how we parse the emails, the important part is that we know that we invoke Jira only when there are new or updated tickets - otherwise there would not have been any new emails to process. Moreover, because it is our side that invokes Jira, we do not expose our internal system to the public network directly.
However, from the perspective of the overall security architecture, email is still part of the attack surface so we need to make sure that we read and parse emails with that in view. In other words, regardless of whether it is Jira invoking us or our reading emails from Jira, all the usual security precautions regarding API integrations and accepting input from external resources, all that still holds and needs to be part of the design of the integration workflow.
The above presented the ways in which we can arrive at the point of invoking Jira, and now we are ready to actually do it.
As with other types of connections, Jira connections are created in Zato Dashboard, as below. Note that you use the email address of a user on whose behalf you connect to Jira but the only other credential is that user’s API token previously generated in Jira, not the user’s password.
With a Jira connection in place, we can now create a Python API service. In this case, we accept a ticket ID on input (called “a key” in Jira) and we return a few details about the ticket to our caller.
This is the kind of a service that could be invoked from a service that is triggered by a scheduled job. That is, we would separate the tasks, one service would be responsible for opening IMAP inboxes and parsing emails and the one below would be responsible for communication with Jira.
Thanks to this loose coupling, we make everything much more reusable - that the services can be changed independently is but one part and the more important side is that, with such separation, both of them can be reused by future services as well, without tying them rigidly to this one integration alone.
# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.common.typing_ import cast_, dictnone
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.jira_ import JiraClient

# ###########################################################################

@dataclass(init=False)
class GetTicketDetailsRequest(Model):
    key: str

@dataclass(init=False)
class GetTicketDetailsResponse(Model):
    assigned_to: str = ''
    progress_info: dictnone = None

# ###########################################################################

class GetTicketDetails(Service):

    class SimpleIO:
        input = GetTicketDetailsRequest
        output = GetTicketDetailsResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: GetTicketDetailsRequest

        # .. create a reference to our connection definition ..
        jira = self.cloud.jira['My Jira Connection']

        # .. obtain a client to Jira ..
        with jira.conn.client() as client: # type: JiraClient

            # Cast to enable code completion
            client = cast_('JiraClient', client)

            # Get details of a ticket (issue) from Jira
            ticket = client.get_issue(input.key)

            # Observe that ticket may be None (e.g. invalid key), hence this 'if' guard ..
            if ticket:

                # .. build a shortcut reference to all the fields in the ticket ..
                fields = ticket['fields']

                # .. build our response object ..
                response = GetTicketDetailsResponse()
                response.assigned_to = fields['assignee']['emailAddress']
                response.progress_info = fields['progress']

                # .. and return the response to our caller.
                self.response.payload = response

# ###########################################################################
The last remaining part is a REST channel to invoke our service through. We will provide the ticket ID (key) on input and the service will reply with what was found in Jira for that ticket.
We are now ready for the final step - we invoke the channel, which invokes the service which communicates with Jira, transforming the response from Jira to the output that we need:
$ curl localhost:17010/jira1 -d '{"key":"ABC-123"}'
{
"assigned_to":"zato@example.com",
"progress_info": {
"progress": 10,
"total": 30
}
}
$
And this is everything for today - just remember that this is just one way of integrating with Jira. The other one, using WebHooks, is something that I will go into in one of the future articles.
Start the tutorial to learn how to integrate APIs and build systems. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single and consistent API to its callers.
Visit the support page if you need assistance.
To learn more about Zato and API integrations in Spanish, click here.
To learn more about API integrations with Zato in French, click here.
John Ludhi/nbshare.io: Save Pandas DataFrame as CSV file [Planet Python]
To save a Pandas DataFrame to a CSV or Excel file, use the following commands.
In this notebook, we will learn about saving a Pandas DataFrame to a CSV file.
For this exercise we will use dummy data.
import pandas as pd
Let us first create a Python list of dictionaries where each dictionary contains information about a trading stock.
data = [{'tickr':'intc', 'price':45, 'no_of_employees':100000}, {'tickr':'amd', 'price':85, 'no_of_employees':20000}]
Let us now convert the above list to a Pandas DataFrame using the pd.DataFrame method.
df = pd.DataFrame(data)
df is a Pandas DataFrame. Let us print it.
To learn more about Pandas and Dataframes, checkout following notebooks...
https://www.nbshare.io/notebooks/pandas/
print(df)
We can save this data frame using the df.to_csv method as shown below. Note that the first argument in the command below is the file name, and the second argument, 'index=False', stops Pandas from inserting row (or index) numbers for each row.
df.to_csv('data.csv', index=False)
The above command should create a 'data.csv' file in our current directory. Let us check that using the 'ls' command.
ls -lrt data.csv
Yes indeed, the file is there. Let us check the contents of this file using the Unix 'cat' command.
Note: I am running this notebook on a Linux machine, which is why I am able to run these Unix commands from the Jupyter notebook.
cat data.csv
As we see above, the content is a comma-separated list of values. Instead of a comma, we can use any other separator via the "sep" argument.
df.to_csv('data.csv', index=False,sep="|")
cat data.csv
Note: There are a lot of options that df.to_csv can take. Check out the complete signature below...
df.to_csv(
path_or_buf: 'FilePathOrBuffer[AnyStr] | None' = None,
sep: 'str' = ',',
na_rep: 'str' = '',
float_format: 'str | None' = None,
columns: 'Sequence[Hashable] | None' = None,
header: 'bool_t | list[str]' = True,
index: 'bool_t' = True,
index_label: 'IndexLabel | None' = None,
mode: 'str' = 'w',
encoding: 'str | None' = None,
compression: 'CompressionOptions' = 'infer',
quoting: 'int | None' = None,
quotechar: 'str' = '"',
line_terminator: 'str | None' = None,
chunksize: 'int | None' = None,
date_format: 'str | None' = None,
doublequote: 'bool_t' = True,
escapechar: 'str | None' = None,
decimal: 'str' = '.',
errors: 'str' = 'strict',
storage_options: 'StorageOptions' = None,
) -> 'str | None'
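The article's title also mentions Excel; the analogous Pandas call is df.to_excel, which writes an .xlsx file (an engine such as openpyxl must be installed for this to work):

# Save the same DataFrame to an Excel file instead of CSV.
df.to_excel('data.xlsx', index=False)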
Ned Batchelder: Cairo in Jupyter, better [Planet Python]
I finally came up with a way I like to create PyCairo drawings in a Jupyter notebook.
A few years ago I wrote here about how to draw Cairo SVG in a Jupyter notebook. That worked, but wasn’t as convenient as I wanted. Now I have a module that manages the PyCairo contexts for me. It automatically handles the displaying of SVG and PNG directly in the notebook, or lets me write them to a file.
The module is drawing.py.
The code looks like this (with a sample drawing copied from the PyCairo docs):
from drawing import cairo_context
def demo():
    with cairo_context(200, 200, format="svg") as context:
        x, y, x1, y1 = 0.1, 0.5, 0.4, 0.9
        x2, y2, x3, y3 = 0.6, 0.1, 0.9, 0.5
        context.scale(200, 200)
        context.set_line_width(0.04)
        context.move_to(x, y)
        context.curve_to(x1, y1, x2, y2, x3, y3)
        context.stroke()
        context.set_source_rgba(1, 0.2, 0.2, 0.6)
        context.set_line_width(0.02)
        context.move_to(x, y)
        context.line_to(x1, y1)
        context.move_to(x2, y2)
        context.line_to(x3, y3)
        context.stroke()
    return context

demo()
Using demo() in a notebook cell will draw the SVG. Nice.
The key to making this work is Jupyter’s special methods _repr_svg_, _repr_png_, and a little _repr_html_ thrown in also.
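To give an idea of the mechanism (a minimal sketch, not the actual contents of drawing.py): any object returned from a notebook cell is rendered inline as SVG if it defines _repr_svg_:

class SvgFigure:
    """Minimal sketch: Jupyter renders this object inline via _repr_svg_."""

    def __init__(self, svg_text):
        self.svg_text = svg_text

    def _repr_svg_(self):
        # Jupyter calls this method and embeds the returned SVG markup.
        return self.svg_text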
The code is at drawing.py. I created it so that I could play around with Truchet tiles:
Python Software Foundation: The 2022 Python Language Summit: Performance Improvements by the Faster CPython team [Planet Python]
Python 3.11, if you haven’t heard, is fast. Over the past year, Microsoft has funded a team – led by core developers Mark Shannon and Guido van Rossum – to work full-time on making CPython faster. With additional funding from Bloomberg, and help from a wide range of other contributors from the community, the results have borne fruit. On the pyperformance benchmarks at the time of the beta release, Python 3.11 was around 1.25x faster than Python 3.10, a phenomenal achievement.
But there is more still to be done. At the 2022 Python Language Summit, Mark Shannon presented on where the Faster CPython project aims to go next. The future’s fast.
The first problem Shannon raised was a problem of measurements. In order to know how to make Python faster, we need to know how slow Python is currently. But how slow at doing what, exactly?
Good benchmarks are vital for a project that aims to optimise Python for general usage. For that, the Faster CPython team needs the help of the community at large. The project “needs more benchmarks,” Shannon said – it needs to understand more precisely what the user base at large is using Python for, how they’re doing it, and what makes it slow at the moment (if it is slow!).
A benchmark, Shannon explained, is “just a program that we can time”. Anybody with a benchmark – or even just a suggestion for a benchmark! – that they believe is representative of a larger project they’re working on is invited to submit them to the issue tracker at the python/pyperformance repository on GitHub.
Nonetheless, the Faster CPython team has plenty to be getting on with in the meantime.
Much of the optimisation work in 3.11 has been achieved through the implementation of PEP 659, a “specializing adaptive interpreter”. The adaptive interpreter that Shannon and his team have introduced tracks individual bytecodes at various points in a program’s execution. When it spots an opportunity, a bytecode may be “quickened”: this means that a slow bytecode, that can do many things, is replaced by the interpreter with a more specialised bytecode that is very good at doing one specific thing. The work on PEP 659 has now largely been done, but major parts, such as dynamic specialisations of for-loops and binary operations, are still to be completed.
Shannon noted that Python also has essentially the same memory consumption in 3.11 as it did in 3.10. This is something he’d like to work on: a smaller memory overhead generally means fewer reference-counting operations in the virtual machine, a lower garbage-collection overhead, and smoother performance as a result of it all.
Another big remaining avenue for optimisations is the question of C extensions. CPython’s easy interface with C is its major advantage over other Python implementations such as PyPy, where incompatibilities with C extensions are one of the biggest hurdles for adoption by users. The optimisation work that has been done in CPython 3.11 has largely ignored the question of extension modules, but Shannon now wants to open up the possibility of exposing low-level function APIs to the virtual machine, reducing the overhead time of communicating between Python code and C code.
Lastly, but certainly not least, Shannon said, “everybody wants a JIT compiler… even if it doesn’t make sense yet”.
A JIT (“just-in-time”) compiler is the name given for a compiler that dynamically detects where performance bottlenecks exist in a program as the program is running. Once these bottlenecks have been identified, the JIT compiles these parts of the program on-the-fly into native machine code in order to speed things up. It’s a similar idea to Shannon’s PEP 659, but goes much further, since the specialising adaptive interpreter never goes beyond the bytecode level.
The idea of using a JIT compiler for Python is hardly new. PyPy’s JIT compiler is the major source of the large performance gains the project has over CPython in some areas. Third-party projects, such as pyjion and numba, bring just-in-time compilation to CPython that’s just a pip install away. Integrating a JIT into the core of CPython, however, would be materially different.
Shannon has historically voiced scepticism about the wisdom of introducing a JIT compiler into CPython itself, and said that work on introducing one is still some way off. A JIT, according to Shannon, will probably not arrive until 3.13 at the earliest, given the amount of lower-hanging fruit that is still to be worked on. The first step towards a JIT, he explained, would be to implement a trace interpreter, which would allow for better testing of concepts and lay the groundwork for future changes.
The gains Shannon’s team has achieved are hugely impressive, and likely to benefit the community as a whole in a profound way. But various problems lie on the horizon. Sam Gross’s proposal for a version of CPython without the Global Interpreter Lock (the nogil fork) has potential for speeding up multithreaded Python code in very different ways to the Faster CPython team’s work – but it could also be problematic for some of the optimisations that have already been implemented, many of which assume that the GIL exists. Eric Snow’s dream of achieving multiple subinterpreters within a single process, meanwhile, will have a smaller performance impact on single-threaded code compared to nogil, but could still create some minor complications for Shannon’s team.
Juri Pakaste: Creating icons in Xcode playgrounds [Planet Python]
I'm no good at drawing. I have Affinity Designer and I like it well enough, but it requires more expertise than I have, really. Usually when I want to draw things, I prefer to retreat back to code.
Xcode playgrounds are pretty OK for writing your graphics code. Select your drawing technology of choice to create an image, create a view that displays it, make it the live view with PlaygroundPage.current.setLiveView, and you're done. Well, almost. How do you get the image out of there?
Say you're creating icons for an iOS project. You want a bunch of variously sized versions of the same icon (I'm assuming here you aren't finessing the different versions too much, or otherwise you wouldn't be reading a tutorial on how to generate images in code), and you want to get them into an asset catalog in Xcode. Xcode's asset catalog editor can accept dragged files, so that seems like a something we could try enable.
SwiftUI makes it really easy.
Start with a function that draws the icon into a CGImage. This one just draws a purplish rectangle. It won't win any ADAs, but it'll serve for this tutorial:
func makeImage(size: CGSize) -> CGImage {
    let ctx = CGContext(
        data: nil,
        width: Int(size.width),
        height: Int(size.height),
        bitsPerComponent: 8,
        bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    )!
    let rect = CGRect(origin: .zero, size: size)
    ctx.setFillColor(red: 0.9, green: 0.4, blue: 0.6, alpha: 1.0)
    ctx.fill(rect)
    let image = ctx.makeImage()!
    return image
}
Next define a bunch of values for the icon sizes that Xcode likes. As of Xcode 13 and iOS 15, something like this is a good representation of what you need:
enum IconSize: CGFloat {
    case phoneNotification = 20.0
    case phoneSettings = 29.0
    case phoneSpotlight = 40.0
    case phoneApp = 60.0
    case padApp = 76.0
    case padProApp = 83.5
}

extension IconSize: CustomStringConvertible {
    var description: String {
        switch self {
        case .phoneNotification: return "iPhone/iPad Notification (\(self.rawValue))"
        case .phoneSettings: return "iPhone/iPad Settings (\(self.rawValue))"
        case .phoneSpotlight: return "iPhone/iPad Spotlight (\(self.rawValue))"
        case .phoneApp: return "iPhone App (\(self.rawValue))"
        case .padApp: return "iPad App (\(self.rawValue))"
        case .padProApp: return "iPad Pro App (\(self.rawValue))"
        }
    }
}
Then define a struct that holds one extra bit of information: the scale we're working at.
struct IconVariant {
    let size: IconSize
    let scale: CGFloat

    var scaledSize: CGSize {
        let scaled = self.scale * self.size.rawValue
        return CGSize(width: scaled, height: scaled)
    }
}

extension IconVariant: CustomStringConvertible {
    var description: String { "\(self.size) @ \(self.scale)x" }
}

extension IconVariant: Identifiable {
    var id: String { self.description }
}
The descriptions are useful for you, the human; the Identifiable conformance will be helpful when you set up a SwiftUI view showing the variants.
Next define all the variants you want:
let icons: [IconVariant] = [
    IconVariant(size: .phoneNotification, scale: 2),
    IconVariant(size: .phoneNotification, scale: 3),
    IconVariant(size: .phoneSettings, scale: 2),
    IconVariant(size: .phoneSettings, scale: 3),
    IconVariant(size: .phoneSpotlight, scale: 2),
    IconVariant(size: .phoneSpotlight, scale: 3),
    IconVariant(size: .phoneApp, scale: 2),
    IconVariant(size: .phoneApp, scale: 3),
    IconVariant(size: .phoneNotification, scale: 1),
    IconVariant(size: .phoneSettings, scale: 1),
    IconVariant(size: .phoneSpotlight, scale: 1),
    IconVariant(size: .padApp, scale: 1),
    IconVariant(size: .padApp, scale: 2),
    IconVariant(size: .padProApp, scale: 2),
]
Then let's start work on getting those variants on screen. We'll use a simple SwiftUI view with stacks for it; it won't be pretty, but it'll do what's needed.
struct IconView: View {
    var body: some View {
        VStack {
            ForEach(icons) { icon in
                HStack {
                    let cgImage = makeImage(size: icon.scaledSize)
                    Text(String(describing: icon))
                    Image(cgImage, scale: 1.0, label: Text(String(describing: icon)))
                }
            }
        }
    }
}
PlaygroundPage.current.setLiveView(IconView())
As promised, functionality over form:
Now we need just the glue to enable dragging. Add a CGImage extension that makes it easier to export the image as PNG data:
extension CGImage {
    var png: Data? {
        guard let mutableData = CFDataCreateMutable(nil, 0),
              let destination = CGImageDestinationCreateWithData(mutableData, "public.png" as CFString, 1, nil)
        else { return nil }
        CGImageDestinationAddImage(destination, self, nil)
        guard CGImageDestinationFinalize(destination) else { return nil }
        return mutableData as Data
    }
}
To make the images in the view draggable, you'll need to use the onDrag view modifier. It requires a function that returns an NSItemProvider. The nicest way to create one is probably with a custom class that conforms to NSItemProviderWriting. Something like this:
final class IconProvider: NSObject, NSItemProviderWriting {
    struct UnrecognizedTypeIdentifierError: Error {
        let identifier: String
    }

    let image: CGImage

    init(image: CGImage) {
        self.image = image
    }

    func loadData(
        withTypeIdentifier typeIdentifier: String,
        forItemProviderCompletionHandler completionHandler: @escaping (Data?, Error?) -> Void
    ) -> Progress? {
        guard typeIdentifier == "public.png" else {
            completionHandler(nil, UnrecognizedTypeIdentifierError(identifier: typeIdentifier))
            return nil
        }
        completionHandler(self.image.png, nil)
        // Progress: all done in one step.
        let progress = Progress(parent: nil)
        progress.totalUnitCount = 1
        progress.completedUnitCount = 1
        return progress
    }

    static var writableTypeIdentifiersForItemProvider: [String] {
        ["public.png"]
    }
}
And then the last thing needed is the onDrag handler. Add it to the Image line in the IconView you created earlier.
Image(cgImage, scale: 1.0, label: Text(String(describing: icon)))
    .onDrag {
        NSItemProvider(object: IconProvider(image: cgImage))
    }
Refresh the playground preview and the images are waiting for you to drag them into the asset catalog.
Mirek Długosz: Announcing Kustosz [Planet Python]
I’m happy to announce Kustosz, a new feed reader that aims to help you focus on worthwhile content.
These days, many open source RSS readers still try to fill the void left by Google Reader – their main goal is to provide familiar visuals and a familiar user experience. Meanwhile, proprietary feed readers incorporate machine learning techniques to guide you through the vast ocean of content and try to “relieve” you of the burden of “reading everything”.
I find both of these approaches problematic. Google Reader was discontinued a decade ago. Everyone who wanted “a Google Reader alternative” has settled on something else or moved on a long time ago. There’s no reason for a new project to try to solve this exact problem.
While it’s tempting to save time and mental capacity by allowing computers to decide what content you should focus on, this is also a fast lane to filter bubbles. Today, we are intimately familiar with psychological, social, and political problems created by employing this approach at scale. It’s more important than ever to let people decide what they want to read.
That’s why Kustosz comes from a different angle. Its goal is to make it easy and convenient to read content that you find worthwhile.
Kustosz provides an easy-to-use, straightforward, distraction-free interface where your content takes central place. It works right in your browser, so you don’t have to install additional software on your computer. And it fits your device, whether you use a phone, a tablet, or a desktop with an external monitor.
We are all busy, and sometimes we can’t read an entire article in one go. That’s why Kustosz tracks how much you have read and lets you pick up right where you left off, at any time, on any device.
To help you get the most out of Kustosz’s features, it automatically downloads the full article content from the source website. Even if the feed author publishes only the article lead, you don’t have to leave Kustosz unless you choose to.
Kustosz is not discouraged when the article you want to read is not in the site’s RSS feed. You can add any web page manually.
While Kustosz doesn’t make any decisions for you, it’s here to automate menial tasks. It has a built-in duplicate detector that can automatically hide articles you have already seen. It provides a flexible and powerful filter system that you can use to automatically hide articles you are not interested in.
And the best part is, your data is yours. Kustosz is open source and hosted on your server. This ensures that you are in control of Kustosz, its data and what it does.
From a technical point of view, Kustosz uses a familiar, modern client-server architecture. The frontend is a Vue.js (v3) web application that relies heavily on features that became widely supported in recent years, like CSS grid and the JavaScript Intersection Observer API. The backend is a Django web application that serves a REST-like API with the help of Django REST framework. Most of the work is done in background tasks managed by Celery. All the hard work of accessing and processing RSS / Atom feed files is done by the excellent reader library. I want to stress how immensely grateful I am to Adrian for creating and maintaining this exemplary piece of code.
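For a flavour of what the reader library handles on Kustosz’s behalf, here is a minimal, self-contained sketch; this is not Kustosz code, and the database path and feed URL are placeholders of my own:

from reader import make_reader

# Open (or create) a local SQLite database that stores feed state.
reader = make_reader("db.sqlite")

# Subscribe to a feed, fetch it, and walk the stored entries.
reader.add_feed("https://example.com/feed.xml")
reader.update_feeds()
for entry in reader.get_entries():
    print(entry.title, entry.link)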
I’ve been using Kustosz as my primary feed reader for about a month now, and I consider it pretty stable and fit for purpose. But it’s software, so of course it has bugs - especially in contexts distinctly different from mine. If you encounter them, feel free to create an issue or submit a PR on GitHub.
As is true of most software, I don’t think Kustosz will ever be truly finished. Right now it’s primarily concerned with text content available on public websites, but my big dream is to support various content sources - things like email newsletters and social media sites immediately spring to mind. On the other hand, I don’t think I want to re-implement RSS Bridge just for the sake of it.
Another dream of mine is to provide an integrated notepad. Reading is great, but truly worthwhile articles are thought-provoking and an invitation to conversation. Active reading demands that you write down what you understood. It would be great if you could do that from the convenience of a single application.
There’s also a little more mundane work to do - things like user interface translation framework, WebSub protocol support, and improving documentation.
Nonetheless, if Kustosz sounds like a tool you could use, please head over to the Kustosz website or documentation page, where you will find system requirements and installation instructions. There’s also a container image you may use to quickly spin up an instance for testing.
Kay Hayen: Compile Python on Windows [Planet Python]
Looking to create an executable from Python script? Let me show you the full steps to achieve it on Windows.
The simple way to add Python to the PATH is to check the corresponding box during installation of CPython. You just download Python and install it, or modify an existing installation by checking the box in the installer. This box is not enabled by default. You can also manually add the Python installation path to the PATH environment variable.
Note
You do not strictly have to execute this step; you can also replace python with the absolute path, e.g. C:\Users\YourName\AppData\Local\Programs\Python\Python310\python.exe, but that can become inconvenient.
Next, open a terminal. This can be cmd.exe or Windows Terminal, or a terminal inside an IDE like Visual Studio Code or PyCharm. Then type python to verify the correct installation, and exit to leave the Python prompt again.
Now install Nuitka with the following command.
python -m pip install nuitka
Now run your program from the terminal. Convince yourself that everything is working.
python fancy-program.py
Note
If it’s a GUI program, make sure it has a .pyw suffix. That is going to let Python know it’s one.
Now compile your program with Nuitka in onefile mode:
python -m nuitka --onefile fancy-program.py
In the case of a GUI program, add one of the many options Nuitka has to adapt for platform specifics, e.g. disabling the console window, setting a program icon, and so on:
python -m nuitka --onefile --windows-disable-console fancy-program.py
This will create fancy-program.exe. Your executable should appear right next to fancy-program.py, and opening it in the Explorer or running it from the terminal should just work:
fancy-program.exe
Python Software Foundation: The 2022 Python Language Summit: Python in the browser [Planet Python]
Python can be run on many platforms: Linux, Windows, Apple Macs, microcomputers, and even Android devices. But it’s a widely known fact that, if you want code to run in a browser, Python is simply no good – you’ll just have to turn to JavaScript.
Now, however, that may be about to change. Over the course of the last two years, and following over 60 CPython pull requests (many attached to GitHub issue #84461), Core Developer Christian Heimes and contributor Ethan Smith have achieved a state where the CPython main branch can now be compiled to WebAssembly. This opens up the possibility of being able to run arbitrary Python programs clientside inside your web browser of choice.
At the 2022 Python Language Summit, Heimes gave a talk updating the attendees of the progress he’s made so far, and where the project hopes to go next.
WebAssembly (or “WASM”, for short), Heimes explained, is a low-level assembly-like language that can be as fast as native machine code. Unlike your usual machine code, however, WebAssembly is independent from the machine it is running on. Instead, the core principle of WebAssembly is that it can be run anywhere, and can be run in a completely isolated environment. This leads to it being a language that is extremely fast, extremely portable, and provides minimal security risks – perfect for running clientside in a web browser.
After much work, CPython now cross-compiles to WebAssembly using emscripten through the --with-emscripten-target=browser flag. The CPython test suite now also passes on emscripten builds, and work is underway to add a buildbot to CPython’s fleet of automatic robot testers, to ensure this work does not regress in the future.
Users who want to experiment with Python in the browser can try it out at https://repl.ethanhs.me/. The work opens up exciting possibilities of being able to run PyGame clientside and adding Jupyter bindings.
It should be noted that cross-compiling to WebAssembly is still highly experimental, and not yet officially supported by CPython. Several important modules in the Python standard library are not currently included in the bundled package produced when --with-emscripten-target=browser is specified, leading to a number of tests needing to be skipped in order for the test suite to pass.
Nonetheless, the future’s bright. Only a few days after Heimes’s talk, Peter Wang, CEO at Anaconda, announced the launch of PyScript in a PyCon keynote address. PyScript is a tool that allows Python to be called from within HTML, and to call JavaScript libraries from inside Python code – potentially enabling a website to be written entirely in Python.
PyScript is currently built on top of Pyodide, a third-party project bringing Python to the browser, on which work began before Heimes started his work on the CPython main branch. With Heimes’s modifications to Python 3.11, this effort will only become easier.
Python Software Foundation: The 2022 Python Language Summit: Lightning talks [Planet Python]
These were a series of short talks, each lasting around five minutes.
Read the rest of the 2022 Python Language Summit coverage here.
Carl Meyer, an engineer at Instagram, presented on a proposal that has since blossomed into PEP 690: lazy imports, a feature that has already been implemented in Cinder, Instagram’s performance-optimised fork of CPython 3.8.
What’s a lazy import? Meyer explained that the core difference with lazy imports is that the import does not happen until the imported object is referenced.
In the following Python module, spam.py, with lazy imports activated, the module eggs would never in fact be imported, since eggs is never referenced after the import:
# spam.py
import sys
import eggs

def main():
    print("Doing some spammy things.")
    sys.exit(0)

if __name__ == "__main__":
    main()
And in this Python module, ham.py, with lazy imports activated, the function bacon_function is imported – but only right at the end of the script, after we’ve completed a for-loop that’s taken a very long time to finish:
# ham.py
import sys
import time
from bacon import bacon_function

def main():
    for _ in range(1_000_000_000):
        print('Doing hammy things')
        time.sleep(1)
    bacon_function()
    sys.exit(0)

if __name__ == "__main__":
    main()
Meyer revealed that the Instagram team’s work on lazy imports had resulted in startup time improvements of up to 70%, memory usage improvements of up to 40%, and the elimination of almost all import cycles within their code base. (This last point will be music to the ears of anybody who has worked on a Python project larger than a few modules.)
Meyer also laid out a number of costs to having lazy imports. Lazy imports create the risk that ImportError (or any other error resulting from an unsuccessful import) could potentially be raised… anywhere. Import side effects could also become “even less predictable than they already weren’t”.
Lastly, Meyer noted, “If you’re not careful, your code might implicitly start to require it”. In other words, you might unexpectedly reach a stage where – because your code has been using lazy imports – it now no longer runs without the feature enabled, because your code base has become a huge, tangled mess of cyclic imports.
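PEP 690 would make this behaviour transparent and interpreter-wide, but anyone who wants a feel for the semantics today can approximate them per module with the standard library’s importlib.util.LazyLoader, following the recipe from the importlib documentation:

import importlib.util
import sys

def lazy_import(name):
    # Resolve the module, but defer executing it until first attribute access.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

lazy_json = lazy_import("json")      # no real work happens yet
print(lazy_json.dumps({"spam": 1}))  # the module is actually loaded here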
Python users who have opinions either for or against the proposal are encouraged to join the discussion on discuss.python.org.
This was less of a talk, and more of an announcement.
Historically, anybody wanting to make a significant change to CPython was required to post to the python-dev mailing list. The Steering Council now views the alternative discussion venue, discuss.python.org, as a superior forum in many respects.
Thomas Wouters, Core Developer and Steering Council member, said that the Steering Council was planning on loosening the requirements, stated in several places, that emails had to be sent to python-dev in order to make certain changes. Instead, they were hoping that discuss.python.org would become the authoritative discussion forum in the years to come.
Kevin Modzelewski, core developer of the Pyston project, gave a short presentation on ways forward for CPython optimisations. Pyston is a performance-oriented fork of CPython 3.8.12.
Modzelewski argued that CPython needed better benchmarks; the existing benchmarks on pyperformance were “not great”. Modzelewski also warned that his “unsubstantiated hunch” was that the Faster CPython team had already accomplished “greater than one-half” of the optimisations that could be achieved within the current constraints. Modzelewski encouraged the attendees to consider future optimisations that might cause backwards-incompatible behaviour changes.
This was another short announcement from Thomas Wouters on behalf of the Steering Council. After sponsorship from Google providing funding for the first ever CPython Developer-In-Residence (Łukasz Langa), Meta has provided sponsorship for a second year. The Steering Council also now has sufficient funds to hire a second Developer-In-Residence – and attendees were notified that they were open to the idea of hiring somebody who was not currently a core developer.
Larry Hastings, CPython core developer, gave a brief presentation on a proposal he had sent round to the python-dev mailing list in recent days: a “forward class” declaration that would avoid all issues with two competing typing PEPs: PEP 563 and PEP 649. In brief, the proposed syntax would look something like this:
forward class X()

continue class X:
    # class body goes here
    def __init__(self, key):
        self.key = key
In theory, according to Hastings, this syntax could avoid issues around runtime evaluation of annotations that have plagued PEP 563, while also circumventing many of the edge cases that unexpectedly fail in a world where PEP 649 is implemented.
The idea was in its early stages, and reaction to the proposal was mixed. The next day, at the Typing Summit, there was more enthusiasm voiced for a plan laid out by Carl Meyer for a tweaked version of Hastings’s earlier attempt at solving this problem: PEP 649.
Samuel Colvin, maintainer of the Pydantic library, gave a short presentation on a proposal (recently discussed on discuss.python.org) to reduce name clashes between field names in a subclass, and method names in a base class.
The problem is simple. Suppose you’re the maintainer of a library, whatever_library. You release Version 1 of your library, and a user starts to use it to make classes like the following:
from whatever_library import BaseModel

class Farmer(BaseModel):
    name: str
    fields: list[str]
Both the user and the maintainer are happy, until the maintainer releases Version 2 of the library. Version 2 adds a method, .fields(), to BaseModel, which will print out all the field names of a subclass. But this creates a name clash with the user’s existing code, which has fields as the name of an instance attribute rather than a method.
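To see the clash concretely, here is a minimal sketch – a stand-in for the library, not Pydantic’s actual implementation – of what happens once Version 2 ships:

class BaseModel:
    def __init__(self, **kwargs):
        # Store every keyword argument as an instance attribute.
        for name, value in kwargs.items():
            setattr(self, name, value)

    def fields(self):  # new in "Version 2" of the library
        return list(self.__dict__)

class Farmer(BaseModel):
    name: str
    fields: list[str]

farmer = Farmer(name='Jones', fields=['meadow', 'highlands'])
print(farmer.fields)  # ['meadow', 'highlands'] -- the attribute shadows the method
# farmer.fields()     # TypeError: 'list' object is not callable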
Colvin briefly sketched out an idea for a new way of looking up names that would make it unambiguous whether the name being accessed was a method or attribute.
class Farmer(BaseModel):
    $name: str
    $fields: list[str]

farmer = Farmer(name='Jones', fields=['meadow', 'highlands'])

print(farmer.$fields)   # -> ['meadow', 'highlands']
print(farmer.fields())  # -> ['name', 'fields']
Electoral Council picks Paragon for election software [Computable]
Commissioned by the Kiesraad (the Dutch Electoral Council), Paragon Customer Communications will supply software for determining election results and submitting candidate lists. The company from Alphen aan den Rijn won the tender this week for the development,...
Judge: police may hack and inspect PGP messages [Computable]
The police may crack systems used to encrypt information and watch along live with those encrypted messages. So rules an Amsterdam judge in a case in which a suspect was convicted on the basis of EncroChat data. Lawyers are...
Musk hits the pause button on Twitter deal [Computable]
Negotiations between Twitter and Elon Musk have been temporarily suspended. Musk, who wants to acquire the social network for 44 billion dollars, is awaiting the investigation into fake and spam accounts. There had already been plenty of fuss about...
Advice: wait with 3.5 GHz until Inmarsat is gone [Computable]
It will probably take until the end of 2023 before the 3.5 GHz frequency band becomes available for public mobile communication services. There is plenty of demand for extra spectrum, but using the 3.5 GHz band agreed for this purpose could interfere with emergency calls from the...
More cooperation between privacy watchdogs [Computable]
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) and its European counterparts are cooperating more and more often. Last year this happened in more than five hundred international investigations, 141 of which were completed. So reports the European Data Protection Board (EDPB), the European partnership...
Python for Beginners: Convert List of Lists to CSV File in Python [Planet Python]
Lists are one of the most frequently used data structures in Python. In this article, we will discuss how we can convert a list of lists to a CSV file in Python.
The csv module provides us with different methods to perform various operations on a CSV file. To convert a list of lists to CSV in Python, we can use the csv.writer() method along with the csv.writerow() method. For this, we will use the following steps: open the output file in write mode, create a csv.writer object from the file object, write the header row using writerow(), and then write each inner list to the file by calling writerow() in a for loop.
After execution of the for loop, the data from the list will have been added to the CSV file. To save the data, you should close the file using the close() method; otherwise, no changes will be saved to the CSV file.
The source code to convert a list of lists to a csv file using the csv.writer() method is as follows.
import csv

listOfLists = [["Aditya", 1, "Python"], ["Sam", 2, 'Java'], ['Chris', 3, 'C++'], ['Joel', 4, 'TypeScript']]
print("The list of lists is:")
print(listOfLists)

# Open the output file in write mode and wrap it in a csv.writer.
myFile = open('demo_file.csv', 'w')
writer = csv.writer(myFile)
writer.writerow(['Name', 'Roll', 'Language'])
for data_list in listOfLists:
    writer.writerow(data_list)
myFile.close()

# Read the file back to verify its contents.
myFile = open('demo_file.csv', 'r')
print("The content of the csv file is:")
print(myFile.read())
myFile.close()
Output:
The list of lists is:
[['Aditya', 1, 'Python'], ['Sam', 2, 'Java'], ['Chris', 3, 'C++'], ['Joel', 4, 'TypeScript']]
The content of the csv file is:
Name,Roll,Language
Aditya,1,Python
Sam,2,Java
Chris,3,C++
Joel,4,TypeScript
In this article, we have discussed an approach to convert a list of lists to a CSV file in Python. With this approach, each list is added to the CSV file irrespective of whether it has the same number of elements as there are columns in the CSV. It is therefore advisable to make sure that each list has the same number of elements, and that the elements appear in the same order. Otherwise, the data appended to the CSV file will become inconsistent and lead to errors.
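As a side note, the same program can be written a little more defensively. A with block closes the file even if an exception occurs, newline='' is what the csv module documentation recommends to avoid spurious blank lines on Windows, and writerows() replaces the explicit for loop. A minimal variant:

import csv

listOfLists = [["Aditya", 1, "Python"], ["Sam", 2, "Java"], ["Chris", 3, "C++"], ["Joel", 4, "TypeScript"]]

with open('demo_file.csv', 'w', newline='') as myFile:
    writer = csv.writer(myFile)
    writer.writerow(['Name', 'Roll', 'Language'])
    writer.writerows(listOfLists)  # writes every inner list in one call
# The file is closed automatically when the with block ends.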
To learn more about lists in Python, you can read this article on list comprehension in Python. You might also like this article on dictionary comprehension in Python.
The post Convert List of Lists to CSV File in Python appeared first on PythonForBeginners.com.
Test and Code: 187: Teaching Web Development, including Front End Testing [Planet Python]
When you are teaching someone web development skills, when is the right time to start teaching code quality and testing practices?
Karl Stolley believes it's never too early. Let's hear how he incorporates code quality in his courses.
Our discussion includes:
- starting people off with good dev practices and tools
- linting
- html and css validation
- visual regression testing
- using local dev servers, including https
- incorporating testing with git hooks
- testing to aid in css optimization and refactoring
- Backstop
- Nightwatch
- BrowserStack
- the three-legged stool of learning and progressing as a developer: testing, version control, and documentation
Karl is also writing a book on WebRTC, so we jump into that a bit too.
Special Guest: Karl Stolley.
Sponsored By:
- Patreon Supporters: Help support the show with as little as $1 per month and be the first to know when new episodes come out (https://www.patreon.com/testpodcast).
- Python Testing with pytest, 2nd edition: The fastest way to learn pytest and practical testing practices (https://pythontest.com/pytest-book/).
Links:
- Backstop: https://garris.github.io/BackstopJS/
- Nightwatch: https://nightwatchjs.org/
- BrowserStack: https://www.browserstack.com/
- Programming WebRTC: Build Real-Time Streaming Applications for the Web by Karl Stolley: https://pragprog.com/titles/ksrtc/programming-webrtc/
Skills software company AG5 raises 1.2 million [Computable]
Amsterdam-based AG5 has raised 1.2 million euros in an investment round from tech investor Peak. AG5 will use the investment to further develop its skills management software and bring it to market worldwide.
Centric renews its operations with IFS Cloud [Computable]
Centric has chosen IFS Cloud as the new enterprise resource planning (ERP) software for its own operations. IFS partner Eqeep will implement and support the solution, and roll it out across all of Centric's activities in the ten European countries where the...
3D-printing companies MakerBot and Ultimaker to merge [Computable]
The American MakerBot and the Dutch Ultimaker, two vendors in the field of desktop 3D printing, are announcing a merger. The merger is backed by existing investors NPM Capital (Ultimaker) and Stratasys (MakerBot), who are putting in 62.4...
15 million for Leuven chipmaker Pharrowtech [Computable]
Pharrowtech, a Leuven-based designer of chips for wireless communication, has raised fifteen million euros in a first round of financing. The startup will put the money into, among other things, developing the next generation of 60 GHz wireless...
CDA demands intervention in Chipsoft 'monopoly' [Computable]
The CDA demands that minister Ernst Kuipers (Health, Welfare and Sport) intervene in the market for hospital software. Joba van den Berg, an MP for that party, argues that it is unhealthy that a single party - the Amsterdam software company Chipsoft -...
Van Teijlingen withdraws from the top of SoftwareOne [Computable]
Michel van Teijlingen is stepping down for health reasons as head of SoftwareOne in the Benelux. He had been Federation Lead there since October of last year. He is also giving up his role as managing director of SoftwareOne Nederland.
NVIDIA Make Shock Open-Source Announcement [OMG! Ubuntu!]
Official open-source NVIDIA graphics drivers get a step closer to reality as NVIDIA announces the first release of open GPU kernel modules for its recent hardware.
This post, NVIDIA Make Shock Open-Source Announcement is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
Ubuntu Preview on WSL Brings Ubuntu Daily Builds to Windows [OMG! Ubuntu!]
It's now much easier to try Ubuntu daily builds on Windows 10 and 11 using the Ubuntu Preview on WSL app recently added to the Microsoft Store.
This post, Ubuntu Preview on WSL Brings Ubuntu Daily Builds to Windows is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
How to Disable Animations in Ubuntu 22.04 LTS [OMG! Ubuntu!]
It's easy to disable animations in Ubuntu 22.04. You don't need extra apps or commands; a setting to turn off UI effects is now present in the Settings app.
This post, How to Disable Animations in Ubuntu 22.04 LTS is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
How to Use the VI Editor in Linux [Linux Journal - The Original Magazine of the Linux Community]
If you’re searching for info related to the VI editor, this article is for you. So, what’s the VI editor? VI is a screen-oriented text editor and the most popular one in the Linux world. The reasons for its popularity are 1) it is available for almost all Linux distros, 2) it works the same across multiple platforms, and 3) its user-friendly features. Currently, VI Improved, or VIM, is the most widely used advanced counterpart of VI.
To work on the VI text editor, you have to know how to use the VI editor in Linux. Let’s find it out from this article.
The VI text editor works in two modes: 1) command mode and 2) insert mode. In command mode, the editor takes the user's commands and acts on the file. The VI editor usually starts in command mode. Here, the words typed act as commands, so you should be in command mode when passing a command.
On the other hand, in insert mode, file editing is done: text is inserted into the file. So, you need to be in insert mode to enter text. Just type ‘i’ to enter insert mode. Use the Esc key to switch from insert mode back to command mode. If you don’t know which mode you’re in, press the Esc key twice; this takes you to command mode.
First, you need to launch the VI editor to begin working in it. To launch the editor, open your Linux terminal and type:
vi filename
If you name an existing file, VI will open it for editing. Alternatively, you’re free to create a completely new file by giving a name that doesn’t exist yet.
You need to be in the command mode to run editing commands in the VI editor. VI is case-sensitive. Hence, make sure you use the commands in the correct letter case. Also, make sure you type the right command to avoid undesired changes. Below are some of the essential commands to use in VI.
i – Inserts at cursor (gets into the insert mode)
a – Writes after the cursor (gets into the insert mode)
A – Writes at the ending of a line (gets into the insert mode)
o – Opens a new line (gets into the insert mode)
ESC – Terminates the insert mode
u – Undo the last change
U – Undo all changes of the entire line
D – Deletes the content of a line after the cursor
R – Overwrites characters from the cursor onwards
r – Replaces a character
s – Substitutes one character under the cursor and continue to insert
S – Substitutes a full line and start inserting at the beginning of a line
Faster adoption of edge-computing architectures [Computable]
Red Hat is introducing new edge-computing capabilities within its open hybrid cloud portfolio. The open-source vendor says it can accelerate the adoption of edge-computing architectures by reducing complexity,...
Cegeka forms separate cybersecurity branch [Computable]
IT services company Cegeka is bundling all of its activities around monitoring, detecting, and responding to cybersecurity incidents into a new division: Cyber Security Operations & Response Center, abbreviated C-SOR²C. The company announced this during the Cybersec..., which kicked off today.
SAS cloud solutions on the rise [Computable]
Analytics specialist SAS generated nineteen percent more revenue from cloud solutions last year. The American company is expanding its industry-specific portfolio with software for life sciences, the energy sector, and marketing technology.
Exact Globe undergoes a complete make-over [Computable]
Exact is releasing a new version of Exact Globe, the program with which 13,000 businesses run their business processes. In Exact Globe+, the software under the hood has been completely renewed and fitted with the latest technology.
Salesforce simplifies project administration at Radboudumc [Computable]
Salesforce and Utrecht-based Growtivity are going to support Radboud University Medical Center (Radboudumc) in Nijmegen with the administration and process support of scientific research. The intention is to simplify and speed up these processes.
Digital transition of municipalities and provinces under pressure [Computable]
Investments in the digital and green transition are being squeezed by a dispute over how the Netherlands wants to spend the money reserved in the European corona recovery fund. The VNG (municipalities) and the IPO (provinces)...
Herke ICT is now called TreeICT [Computable]
Construction-sector IT firm Herke ICT will henceforth go by the name TreeICT. The Alkmaar-based company, founded in 1998 by Herke Dekker, was acquired early last year by investor Nedvest, which placed it with...
KDE Connect is Now Available for iPhone & iPad [OMG! Ubuntu!]
A KDE Connect app is available on the Apple App Store. It lets iPhone & iPad users benefit from integration between their device(s) and the Linux desktop.
This post, KDE Connect is Now Available for iPhone & iPad is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
Deb-Get is ‘Apt-Get’ for 3rd-Party Ubuntu Software [OMG! Ubuntu!]
All of your favourite extra-repo Ubuntu apps are now a single command away. Deb-Get is a tool that installs deb files from websites via the command line.
This post, Deb-Get is ‘Apt-Get’ for 3rd-Party Ubuntu Software is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
Firefox 100 is Now Available to Download 🥳 [OMG! Ubuntu!]
Mozilla Firefox 100 is available to download. The new release includes new site theme options, subtitles in picture-in-picture mode, and Linux bug fixes.
This post, Firefox 100 is Now Available to Download 🥳 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
Open Source Video Editor Kdenlive Gains 10-Bit Color Support [OMG! Ubuntu!]
A new version of Kdenlive, a Qt-based open source video editor, is available to download. We recap Kdenlive 22.04's new features and UI tweaks.
This post, Open Source Video Editor Kdenlive Gains 10-Bit Color Support is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
Rhythmbox 3.4.5 Improves Its Support for Podcasts [OMG! Ubuntu!]
Rhythmbox 3.4.5 is available to download. It includes big improvements to podcast downloading, playback, and management plus a raft of smaller tweaks.
This post, Rhythmbox 3.4.5 Improves Its Support for Podcasts is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.
LibreOffice 7.3.3 Community available for download [Press Releases Archives - The Document Foundation Blog]
LibreOffice is now available for download also on SourceForge
Berlin, May 4, 2022 – LibreOffice 7.3.3 Community, the third minor release of the LibreOffice 7.3 family, targeted at technology enthusiasts and power users, is available for download from https://www.libreoffice.org/download/. In addition to the LibreOffice website, starting from tomorrow it will be possible to download LibreOffice from SourceForge: https://sourceforge.net/projects/libreoffice/files/libreoffice/stable/.
Logan Abbott, SourceForge’s President and COO, says: “We’re happy to add to our open source download library an amazing open source office suite such as LibreOffice, which is without a doubt one of the best office suites ever and one which I personally use often. I highly recommend it to anyone that needs a powerful FOSS office suite.”
The LibreOffice 7.3 family offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files.
Microsoft files are still based on the proprietary format deprecated by ISO in 2008, which is artificially complex, and not on the ISO-approved standard. This lack of respect for the ISO standard format may create issues for LibreOffice, and is a huge obstacle to transparent interoperability.
LibreOffice for enterprise deployments
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), mobile platforms (Android and iOS) and the cloud. They may have a different name, according to each company brand strategy, but they share the same LibreOffice unique advantages, robustness and flexibility.
Availability of LibreOffice 7.3.3 Community
LibreOffice 7.3.3 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity, and who therefore prefer a release that has undergone more testing and bug fixing over new features, The Document Foundation provides LibreOffice 7.2.6 and soon LibreOffice 7.2.7.
LibreOffice 7.3.3 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.3.3/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.3.3/RC2 (changed in RC2). Over 80 bugs and regressions have been solved.
LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for the app stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help the project to make all of these resources available.
LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
LibreOffice 7.3.3 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.
Primer to Container Security [Linux Journal - The Original Magazine of the Linux Community]
Containers are considered a standard way of deploying microservices to the cloud. Containers are better than virtual machines in almost all ways except security, which may be the main barrier to their widespread adoption.
This article will provide a better understanding of container security and available techniques to secure them.
A Linux container can be defined as a process, or a set of processes, running in userspace and isolated from the rest of the system by different kernel tools.
Containers are great alternatives to virtual machines (VMs). Even though containers and virtual machines provide the same isolation benefits, they differ in that containers virtualize the operating system instead of the hardware. This makes them lightweight, faster to start, and less memory-hungry.
As multiple containers share the same kernel, the solution is less secure than VMs, which have their own copies of the OS, libraries, dedicated resources, and applications. That makes VMs extremely secure, but their large storage size and reduced performance limit the total number of VMs that can run simultaneously on a server. Furthermore, VMs take a lot of time to boot.
The introduction of microservice architecture has changed the way of developing software. Microservices allow the development of software in small self-contained independent services. This makes the application easier to scale and provides agility.
If a part of the software needs to be rewritten, it can easily be done by changing only that part of the code without interrupting any other service – something that wasn't possible with a monolithic architecture.
Namespaces ensure that the resources of processes running in one container are isolated from those of others. They partition kernel resources between different processes: one set of processes in a separate namespace sees one set of resources, while another set of processes sees another. Processes in different namespaces see different process IDs, hostnames, user IDs, file names, names for network access, and some interprocess communication. Hence, each file system namespace has its own private mount table and root directory.
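If you want to see namespace isolation first-hand, the unshare tool from util-linux gives a quick demonstration; this is a sketch of the idea (run as root), not a container runtime:

# Start a shell in fresh PID and mount namespaces, remounting /proc
# so that process listings reflect the new namespace.
sudo unshare --fork --pid --mount-proc bash

# Inside that shell, it sees itself as PID 1 and only its own
# children; the host's other processes are invisible.
ps aux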
Scrolling Up and Down in the Linux Terminal [Linux Journal - The Original Magazine of the Linux Community]
Are you looking for the technique of scrolling through your Linux terminal? Brace yourself. This article is written for you. Today you’ll learn how to scroll up and down in the Linux terminal. So, let’s begin.
But before going ahead and learning about up and down scrolling in the terminal, let’s find out why scrolling in the Linux terminal matters. When you have a lot of output printed on your terminal screen, it helps to make the terminal behave in a particular manner. You can clear the terminal at any time, which may make your work easier and quicker. But what if you’re troubleshooting an issue and need a previously entered command? Then scrolling up or down comes to the rescue.
Various shortcuts and commands allow you to perform scrolling in the Linux terminal whenever you want. So, for easy navigation in your terminal using the keyboard, read on.
In the Linux terminal, you can scroll up by page using the Shift + PageUp shortcut. And to scroll down in the terminal, use Shift + PageDown. To go up or down in the terminal by line, use Ctrl + Shift + Up or Ctrl + Shift + Down respectively.
Following are some key combinations that are useful in scrolling through the Linux terminal.
Ctrl+End: This allows you to scroll down to your cursor.
Ctrl+Page Up: This key combination lets you scroll up by one page.
Ctrl+Page Dn: This lets you scroll down by one page.
Ctrl+Line Up: To scroll up by one line, use this key combination.
The more command allows you to view text files within the command prompt. For bigger files (for example, log files), it shows one screen at a time, and it also lets you scroll up and down within the file. To advance the display one line at a time, press the Enter key. To advance a screenful at a time, use the Spacebar. To scroll backward, press ‘b’.
To disable the scrollbar, follow the steps given in this section. First, press the Menu button in the top-right corner of the window. Then select Preferences. From the Profiles section in the sidebar, select the profile you’re currently using. Then select the Scrolling option. Finally, uncheck Show scrollbar to disable the scrollbar in the terminal. Your preference will be saved immediately.
Self-Hosted Static Homepages: Dashy Vs. Homer [Linux Journal - The Original Magazine of the Linux Community]
Authors: Brandon Hopkins, Suparna Ganguly
Self-hosted homepages are a great way to manage your home lab or cloud services. If you’re anything like me, chances are you have a variety of docker containers, media servers, and NAS portals all over the place. Using simple bookmarks to keep track of everything often isn’t enough. With a self-hosted homepage, you can view everything you need from anywhere, and you can add integrations and other features to help you better manage it all.
Dashy and Homer are two separate static homepage applications. These are used in home labs and on the cloud to help people organize and manage their services, docker containers, and web bookmarks. This article will overview exactly what these self-hosted homepages have to offer.
Dashy is a 100% free and open-source, self-hosted, highly customizable homepage app for your server with a strong focus on privacy. It offers an easy-to-use visual editor, widgets, status checking, themes, and lots more. Below are some of the features you can avail yourself of with Dashy.
Live Demo: https://demo.dashy.to/
Customize
You can customize Dashy however you want, to fit your use case. From the UI you can choose from different layouts, show or hide components, change item sizes, switch themes, and a lot more. You can customize each area of your dashboard, and there are config options for a custom HTML header, footer, title, navbar links, etc. If you don’t need something, just hide it!
Dashy offers multiple color themes, a UI color editor, and support for custom CSS. Since all of the properties use CSS variables, they are quite easy to override. In addition to themes, you get a host of icon options, such as Font Awesome, home lab icons, Material Design Icons, normal images, emojis, and auto-fetched favicons.
Integrations
GIMP in a Pinch: Life after Desktop [Linux Journal - The Original Magazine of the Linux Community]
So my Dell XPS 13 DE laptop running Ubuntu died on me today. Let’s just say I probably should not have attempted to be efficient and take a bath and work at the same time!
Unfortunately, as always seems to be the case, you need something exactly when you don’t have it, and that is the case today. I have some pictures that I need to edit for a website, and I only know and use GIMP. I took a look at my PC inventory at home, and I had two options:
My roommate was using his computer, so that really left me with one option: the Chromebook. I also had no desire to learn another OS today, as I have done enough distro hopping in the last few months. I charged and booted up the Chromebook and started to figure out how I could get GIMP onto it. Interestingly enough, there are not many clear-cut options for running GIMP on an Android device. There was an option to run a Linux developer environment on the Chromebook, but it required 10GB of space, which I didn’t have. Therefore, option two was to find an app on the Google Play Store.
Typing GIMP brought me to an app called XGimp Image Editor from DMobileAndroid; I installed it and loaded it with an image, only to find this:
This definitely is nothing like GIMP and appeared to be very limited in functionality anyway. I could see why it had garnered a 1.4 star rating as it definitely is not what someone would expect when they are looking for something similar to GIMP.
So I took a look at the other options, and there was another app called GIMP from Userland Technologies. It does cost $1.99, but it was a one-time charge and seemed to be the only other option on the Play Store. Reviewing the screenshots and the description of the application seemed to suggest that this would be the actual GIMP app that I was using on my desktop so I went ahead and downloaded it. Installation was relatively quick, and I started running it and to my surprise, here is what I saw:
It appears that the application basically is a Linux desktop build that automatically launches the desktop version of GIMP. Therefore, it really is GIMP. I loaded up an image which was also relatively easy to do as it seamlessly connected to my folders on my chromebook.
Geek Guide: Purpose-Built Linux for Embedded Solutions [Linux Journal - The Original Magazine of the Linux Community]
The explosive growth of the Internet of Things (IoT) is just one of several trends that are fueling the demand for intelligent devices at the edge. Increasingly, embedded devices use Linux to leverage libraries and code, as well as Linux OS expertise, to deliver functionality faster, simplify ongoing maintenance, and provide the most flexibility and performance for embedded device developers.
This e-book looks at the various approaches to providing both Linux and a build environment for embedded devices and offers best practices on how organizations can accelerate development while reducing overall project cost throughout the entire device lifecycle.
How to Install and Uninstall KernelCare [Linux Journal - The Original Magazine of the Linux Community]
In my previous article, I described what KernelCare is. In this article, I’m going to tell you how to install, uninstall, clear the KernelCare cache, and other important information regarding KernelCare. In case you’re yet to know about the product, here’s a short recap. KernelCare provides automated security updates to the Linux kernel. It offers patches and error fixes for various Linux kernels.
So, if you are looking for anything similar, you have landed upon the right page. Let’s begin without further ado.
Before installing KernelCare on your Linux system, ensure that you have one of the operating systems given below.
64-bit RHEL/CentOS 5.x, 6.x, 7.x
CloudLinux 5.x, 6.x
Virtuozzo/PCS/OpenVZ 2.6.32
Debian 6.x, 7.x
Ubuntu 14.04
Note: In case you have KernelCare installed on your machine, it might be useful to know the current KernelCare version before installing KernelCare next time. To know the current version run the below-given command as root:
/usr/bin/kcarectl --uname
To check if your current kernel is compatible with KernelCare, you need to use the following code.
curl -s -L https://kernelcare.com/checker | python
Run the following command to install KernelCare.
curl -s -L https://kernelcare.com/installer | bash
If you use an IP-based license, you don’t need to do anything more. However, if you use a key-based license, run the following command.
/usr/bin/kcarectl --register KEY
KEY is a registration key code string. It’s given to you when you sign up to purchase or to go through a trial of KernelCare. Let’s see an example.
[root@unixcop:~]/usr/bin/kcarectl --register XXXXXXXXXXX
Server Registered
The above example shows a registration key code string.
If you experience a “Key limit reached” error message, you need to first unregister the server after the trial ends. To do so, type:
kcarectl --unregister
For checking if the patches have been applied successfully or not, use the command as given below.
/usr/bin/kcarectl --info
Now the software will check for new patches automatically every 4 hours.
If you want to run updates manually, run:
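With the kcarectl client shown above, that is presumably:
/usr/bin/kcarectl --update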
LibreOffice 7.3.2 Community available for download [Press Releases Archives - The Document Foundation Blog]
Berlin, March 31, 2022 – LibreOffice 7.3.2 Community, the second minor release of the LibreOffice 7.3 family, targeted at technology enthusiasts and power users, is available for download from https://www.libreoffice.org/download/.
The LibreOffice 7.3 family offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files.
Microsoft files are still based on the proprietary format deprecated by ISO in 2008, which is artificially complex, and not on the ISO-approved standard. This lack of respect for the ISO standard format may create issues for LibreOffice, and is a huge obstacle to transparent interoperability.
LibreOffice for enterprise deployments
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), mobile platforms (Android and iOS) and the cloud. They may have a different name, according to each company brand strategy, but they share the same LibreOffice unique advantages, robustness and flexibility.
Availability of LibreOffice 7.3.2 Community
LibreOffice 7.3.2 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity, and who therefore prefer a release that has undergone more testing and bug fixing over new features, The Document Foundation provides LibreOffice 7.2.6.
LibreOffice 7.3.2 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.3.2/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.3.2/RC2 (changed in RC2). Over 80 bugs and regressions have been solved.
LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for the app stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help the project to make all of these resources available.
LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
LibreOffice 7.3.2 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.
What’s KernelCare? [Linux Journal - The Original Magazine of the Linux Community]
This article explains all that you need to know about KernelCare. But before studying KernelCare, let’s do a quick recap of the Linux kernel; it’ll help you understand KernelCare better. The Linux kernel is the core part of the Linux OS. It resides in memory and tells the CPU what to do.
Now let’s begin with today’s topic which is KernelCare. And if you’re a system administrator this article is going to present valuable information for you.
So, what’s KernelCare? KernelCare is a patching service that offers live security updates for Linux kernels, shared libraries, and embedded devices. It patches security vulnerabilities inside the Linux kernel without creating service interruptions or any downtime. Once you install KernelCare on the server, security updates automatically get applied every 4 hours on your server. It dismisses the need for rebooting your server after making updates.
It is a commercial product and is licensed under GNU GPL version 2. Cloud Linux, Inc developed this product. The first beta version of KernelCare was released in March 2014 and its commercial launch was in May 2014. Since then they have added various useful integrations for automation tools, vulnerability scanners, and others.
Operating systems supported by KernelCare include CentOS/RHEL 5, 6, 7; Cloud Linux 5, 6; OpenVZ, PCS, Virtuozzo, Debian 6, 7; and Ubuntu 14.04.
Are you wondering if KernelCare is important for you or not? Find out here. By installing the latest kernel security patches, you are able to minimize potential risks. When you try to update the Linux kernel manually, it may take hours. Apart from the server downtime, it can be a stressful job for the system admins and also for the clients.
Once kernel updates are applied, the server needs a reboot. This is usually done during off-peak hours, which causes some additional stress. However, skipping server reboots can cause a whole lot of security issues. Sometimes, even after rebooting, the server experiences issues and doesn’t easily come back up. Fixing such issues is a headache for system admins, who often need to roll back all the applied updates to get the server up quickly.
With KernelCare, you can avoid such issues.
KernelCare eliminates the non-compliance and service interruptions caused by system reboots. The KernelCare agent resides on your server and periodically checks for new updates; when it finds any, it downloads them and applies them to the running kernel. A KernelCare patch is a piece of code that replaces buggy code in the kernel.
Getting Started with Docker Semi-Self-Hosting on Linode [Linux Journal - The Original Magazine of the Linux Community]
With the evolution of technology, we find ourselves needing to be ever more vigilant about our online security. Our browsing and shopping behaviors are also continuously tracked via tracking cookies dropped on our browsers, which we allow by clicking the “I Accept” button next to deliberately long agreements before we can get the full benefit of a site.
Additionally, hackers are always looking for a target and it's common for even big companies to have their servers compromised in any number of ways and have sensitive data leaked, often to the highest bidder.
These are just some of the reasons that I started looking into self-hosting as much of my own data as I could.
Because not everyone can self-host on their own private hardware, whether for lack of hardware or because their ISP makes it difficult or impossible, I want to show you what I believe to be the next best step: a semi-self-hosted solution on Linode.
Let's jump right in!
First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple of bucks more per month.
Log into your Linode account and click on "Create Linode".
Don't have a Linode account? Get $100 in credit by clicking here.
On the "Create" page, click on the "Marketplace" tab and scroll down to the "Docker" option. Click it.
With Docker selected, scroll down and close the "Advanced Options" as we won't be using them.
Below that, we'll select the most recent version of Debian (version 10 at the time of writing).
To get the lowest latency for your setup, select the Region nearest you.
When we get to the "Linode Plan" area, find an option that fits your budget. You can always start with a small plan and upgrade later as your needs grow.
Next, enter a "Linode Label" as an identifier for you. You can enter tags if you want.
Enter a Root Password and import an SSH key if you have one. If you don't, that's fine; you don't need to use an SSH key. If you'd like to generate one and use it, you can find more information in "Creating an SSH Key Pair and Configuring Public Key Authentication on a Server".
5 Lesser-Known Open Source Web Browsers for Linux in 2022 [Linux Journal - The Original Magazine of the Linux Community]
If you’re in search of lesser-known open-source web browsers, this article is written for you. It takes you through five amazing open-source web browsers readily available for your Linux system. Let’s look at the options to choose from in 2022.
The Konqueror web browser is developed by KDE. One of the lesser-known open-source browsers, it is built on top of KHTML and can use either the KHTML or KDEWebKit rendering engine. Konqueror was built for file previewing and file management of any kind: it manages files on ftp and sftp servers using Dolphin’s features, including service menus, version control and the basic UI, and it ships a full-featured FTP client, so you can split views to show remote and local folders and previews in the same window.
For previewing files, Konqueror has built-in embedded applications, such as Gwenview for pictures, Okular and Calligra for documents, and KTextEditor for text files. You can extend it with various plugins, such as service menus, KPart for ad blocking, KIO for file access, and others.
The international KDE community maintains the Konqueror browser.
GNOME Web comes next in this list of free and open-source web browsers made for Linux. It’s a clean browser that features first-class GNOME and Pantheon desktop integrations. It also includes a built-in adblocker and Intelligent Tracking Prevention. It primarily follows GNOME’s design philosophy. So, there’s no wasted space or useless widgets.
Despite being a GNOME component, the GNOME Web browser does not depend on other GNOME components. GNOME Web is built on top of the WebKit rendering engine. You can use Flatpak to install Epiphany, as Flatpak is the most reliable application distribution mechanism for Linux. Elementary OS and Bodhi Linux use GNOME Web as their default web browser. Did you know GNOME Web’s codename is Epiphany? Why Epiphany? The word means a sudden perception or manifestation of the meaning of something. Let’s move on to our next open-source browser.
Announcement of LibreOffice 7.2.6 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, March 10, 2022 – LibreOffice 7.2.6 Community, the sixth minor release of the LibreOffice 7.2 family, targeted at desktop productivity, is available from the download page.
End user support is provided by volunteers via email and online resources: community support. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: LibreOffice Business.
LibreOffice 7.2.6’s changelog pages are available on TDF’s wiki: RC1 and RC2.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
LibreOffice Technology based products for Android and iOS are listed here, while products for App Stores and ChromeOS are listed on this page.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools on our donate page.
LibreOffice 7.2.6 is built with document conversion libraries from the Document Liberation Project.
LibreOffice 7.3.1 Community available for download [Press Releases Archives - The Document Foundation Blog]
Berlin, March 3, 2022 – LibreOffice 7.3.1 Community, the first minor release of the LibreOffice 7.3 family, targeted at technology enthusiasts and power users, is available for download from https://www.libreoffice.org/download/. This version provides a solution to several LibreOffice 7.3 bugs, including the Auto Calculate regression in Calc, the crashes when running Calc on CPUs lacking AVX instructions, and the crashes related to the Skia graphics engine on macOS.
The LibreOffice 7.3 family offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files.
Microsoft files are still based on the proprietary format deprecated by ISO in 2008, which is artificially complex, and not on the ISO-approved standard. This lack of respect for the ISO standard format may create issues for LibreOffice, and is a huge obstacle to transparent interoperability.
LibreOffice for enterprise deployments
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), mobile platforms (Android and iOS) and the cloud. They may have a different name, according to each company brand strategy, but they share the same LibreOffice unique advantages, robustness and flexibility.
Availability of LibreOffice 7.3.1 Community
LibreOffice 7.3.1 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity, and who therefore prefer a release that has undergone more testing and bug fixing, The Document Foundation provides LibreOffice 7.2.5.
LibreOffice 7.3.1 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.3.1/RC1 (changed in RC1), https://wiki.documentfoundation.org/Releases/7.3.1/RC2 (changed in RC2) and https://wiki.documentfoundation.org/Releases/7.3.1/RC3 (changed in RC3).
LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help the project to make all of these resources available.
LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
LibreOffice 7.3.1 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.
LibreOffice 7.3 Community is better than ever at interoperability [Press Releases Archives - The Document Foundation Blog]
In addition to the majority of code commits being focused on interoperability with Microsoft’s proprietary file formats, there is a wealth of new features targeted at users migrating from Office, to simplify the transition
Berlin, February 2, 2022 – LibreOffice 7.3 Community, the new major release of the volunteer-supported free office suite for desktop productivity, is available from https://www.libreoffice.org/download. Based on the LibreOffice Technology platform for personal productivity on desktop, mobile and cloud, it provides a large number of improvements targeted at users migrating from Microsoft Office to LibreOffice, or exchanging documents between the two office suites.
There are three different kinds of interoperability improvements:
In addition, LibreOffice’s Help has been improved to support all users, with particular attention to those switching from Microsoft Office: search results – which now use FlexSearch instead of Fuzzysort for indexing – are focused on the user’s current module; Help pages for Calc Functions have been reviewed for accuracy and completeness and linked to the Calc Function wiki pages; and Help pages for the ScriptForge scripting library have been updated.
ScriptForge libraries, which make it easier to develop macros, have also been extended with various features: the addition of a new Chart service, to define charts stored in Calc sheets; a new PopupMenu service, to describe the menu to be displayed after a mouse event; an extensive option for Printer Management, with a list of fonts and printers; and a feature to export documents to PDF with full management of PDF options. The whole set of services is available with identical syntax and behavior for Python and Basic.
LibreOffice offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files. In addition, LibreOffice provides filters for a large number of legacy document formats, to return ownership and control to users.
Microsoft files are still based on the proprietary format deprecated by ISO in 2008, and not on the ISO approved standard, so they hide a large amount of artificial complexity. This causes handling issues with LibreOffice, which defaults to a true open standard format (the OpenDocument Format).
LibreOffice 7.3 is available natively for Apple Silicon, a series of processors designed by Apple and based on the ARM architecture. The option has been added to the default ones available on the download page.
A video summarizing the top new features in LibreOffice 7.3 Community is available on YouTube: https://www.youtube.com/watch?v=Raw0LIxyoRU and PeerTube: https://peertube.opencloud.lu/w/iTavJYSS9YYvnW43anFLeC.
A description of all new features is available in the Release Notes [1].
Contributors to LibreOffice 7.3 Community
LibreOffice 7.3 Community’s new features have been developed by 147 contributors: 69% of code commits are from the 49 developers employed by three companies sitting in TDF’s Advisory Board – Collabora, Red Hat and allotropia – or other organizations (including The Document Foundation), and 31% are from 98 individual volunteers.
In addition, 641 volunteers have provided localizations in 155 languages. LibreOffice 7.3 Community is released in 120 different language versions, more than any other free or proprietary software, and as such can be used in the native language (L1) by over 5.4 billion people worldwide. In addition, over 2.3 billion people speak one of those 120 languages as their second language (L2).
LibreOffice for Enterprises
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a large number of dedicated value-added features. These include long-term support options, professional assistance, personalized developments and other benefits such as SLA (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.
Despite this recommendation, an increasing number of enterprises are using the version supported by volunteers, instead of the version optimized for their needs and supported by the different ecosystem companies.
Over time, this represents a problem for the sustainability of the LibreOffice project, because it slows down the evolution of the project. In fact, every line of code developed by ecosystem companies for their enterprise customers is shared with the community on the master code repository, and improves the LibreOffice Technology platform.
Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), for mobile platforms (Android and iOS), and for the cloud. Slowing down the development of the platform is hurting users at every level, and the LibreOffice project may fall short of its expectations and possibilities.
Migrations to LibreOffice
The Document Foundation has developed a Migration Protocol to support enterprises moving from proprietary office suites to LibreOffice, which is based on the deployment of an LTS version from the LibreOffice Enterprise family, plus migration consultancy and training sourced from certified professionals who offer value-added solutions in line with proprietary offerings. Reference: https://www.libreoffice.org/get-help/professional-support/.
In fact, LibreOffice – thanks to its mature codebase, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners – is the ideal solution for businesses that want to regain control of their data and free themselves from vendor lock-in.
Availability of LibreOffice 7.3 Community
LibreOffice 7.3 Community is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.12.
LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
For users whose main objective is personal productivity, and who therefore prefer a release that has undergone more testing and bug fixing, The Document Foundation maintains the LibreOffice 7.2 family, which includes some months of back-ported fixes. The current version is LibreOffice 7.2.5.
The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org
LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at https://www.libreoffice.org/donate.
LibreOffice 7.3 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org
[1] Release Notes: https://wiki.documentfoundation.org/ReleaseNotes/7.3
Press Kit
Download link: https://nextcloud.documentfoundation.org/s/MnZEgpr86TzwBJi
LibreOffice 7.2.5 is now available [Press Releases Archives - The Document Foundation Blog]
Berlin, January 6, 2022 – The Document Foundation announces LibreOffice 7.2.5 Community, the fifth minor release of the LibreOffice 7.2 family, which is available on the download page.
This version includes 90 bug fixes and improvements to document compatibility. The changelogs provide details of the fixes: changes in RC1 and changes in RC2.
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: LibreOffice in Business.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite, not only for the desktop but also for mobile and the cloud.
LibreOffice Technology-based products for Android and iOS are listed on this page, while products for App Stores and ChromeOS are listed here.
Individual users are assisted by a global community of volunteers, via our community help pages. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.
LibreOffice users are invited to join the community at Ask LibreOffice, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at What Can I Do For LibreOffice.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card, bank transfer, cryptocurrencies and other methods on this page.
LibreOffice 7.2.5 is built with document conversion libraries from the Document Liberation Project.
LibreOffice 7.2.4 Community and LibreOffice 7.1.8 Community available ahead of schedule to provide an important security fix [Press Releases Archives - The Document Foundation Blog]
Berlin, December 6, 2021 – The Document Foundation announces LibreOffice 7.2.4 Community and LibreOffice 7.1.8 Community to provide a key security fix. The releases are immediately available from https://www.libreoffice.org/download/, and all LibreOffice users are recommended to update their installation. Both new versions include the fixed NSS 3.73.0 cryptographic library, to solve CVE-2021-43527 (the NSS security fix is the only change compared to the previous version).
LibreOffice 7.2.4 Community is also available for Apple Silicon from this link: https://download.documentfoundation.org/libreoffice/stable/7.2.4/mac/aarch64/.
LibreOffice Community is based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
The Document Foundation announces LibreOffice 7.2.3 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, November 25, 2021 – The Document Foundation announces LibreOffice 7.2.3 Community, the third minor release of the LibreOffice 7.2 family targeted at technology enthusiasts and power users, which is available for download from https://www.libreoffice.org/download/. This version includes 112 bug fixes and improvements to document compatibility.
LibreOffice 7.2.3 Community is also available for Apple Silicon from this link: https://download.documentfoundation.org/libreoffice/stable/7.2.3/mac/aarch64/.
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
Availability of LibreOffice 7.2.3 Community
LibreOffice 7.2.3 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity, and who therefore prefer a release that has undergone more testing and bug fixing, The Document Foundation provides LibreOffice 7.1.7.
LibreOffice 7.2.3 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.2.3/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.2.3/RC2 (changed in RC2).
LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.
LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
LibreOffice 7.2.3 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.
Two Super Fast App Launchers for Ubuntu 19.04 [Tech Drive-in]
During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, third-party app launchers were a big deal. Over time the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.
As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer, on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old-school, we'll only be discussing Synapse and Ulauncher here.
sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb
sudo apt-get install -f
A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]
Snap apps are a godsend. ElectronPlayer is an Electron-based app available on the Snap store that doubles as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane, etc.
And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full-screen applications.
sudo snap install electronplayer
Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]
As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed, see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.
Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.
And the process itself is not as tedious or time-consuming as it is on Windows. Also unlike Windows, the upgrades are not forced upon you while you're in the middle of something.
sudo do-release-upgrade -d
15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]
Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock-solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary, though the availability of the latest Linux kernel, version 5.0, is significant.
sudo apt update && sudo apt dist-upgrade
sudo apt install gnome-tweaks
sudo apt install ubuntu-restricted-extras
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
sudo apt install chrome-gnome-shell
sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers
Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]
A "Disco Dingo" themed wallpaper was already there. But the latest update brings a bunch of new wallpapers as system defaults on Ubuntu 19.04.
LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]
UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on machines with Windows pre-installed, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.
What is UEFI?
UEFI is a standard, or a specification, that replaced the legacy BIOS firmware which was the industry standard for decades. Essentially, UEFI defines the software interface between the operating system and the platform firmware.
UEFI boot has three phases: SEC, PEI and DXE. DXE, short for Driver eXecution Environment, is the phase where the UEFI system loads drivers for configured devices. LinuxBoot replaces specific firmware functionality, like the UEFI DXE phase, with a Linux kernel and runtime.
LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features." writes Linux Foundation.
Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]
The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride-sharing and ride-hailing company valued in the billions. It is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.
Enough about Uber as a company. For those among you who regularly use the Uber app for booking a cab, Uber CLI could be a useful companion.
sudo apt update
sudo apt install nodejs npm
npm install uber-cli -g
uber time 'pickup address here'
Easy, right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test and leave feedback. See the Uber CLI GitHub page for more info.
uber price -s 'start address' -e 'end address'
UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]
Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my four-year-old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.
Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]
We've featured cool-retro-term before. It is a wonderful little terminal emulator app on Ubuntu (and Linux) that adorns this cool retro look of the old CRT displays.
Let the pictures speak for themselves.
sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term
Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]
Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday where they unveiled the much hyped game streaming platform called Stadia.
Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works as well as shown in the demos, Stadia could be what the future of gaming looks like.
Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]
Ubuntu 19.04, once scheduled to arrive in another 30 days, has now been released. I've been using it for the past week or so, and even as a pre-beta, the OS was pretty stable and not buggy at all. Here are a bunch of things you should know about Ubuntu 19.04.
Purism: A Linux OS is talking Convergence again [Tech Drive-in]
The hype around "convergence" just won't die, it seems. We have heard it a lot from Ubuntu, from KDE, and even from Google and Apple. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Google failed to pull it off with their Android/iOS duopoly. Purism's Debian-based PureOS wants to change all that for good.
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."
Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]
Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. Be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a wayback machine in that sense.
sudo apt remove komorebi
Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]
Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.
sudo snap install mari0
sudo snap connect mari0:joystick
Florida based Startup Builds Ubuntu Powered Aerial Robotics [Tech Drive-in]
Apellix is a Florida-based startup that specialises in aerial robotics. It intends to create safer work environments by replacing workers with task-specific drones that complete high-risk jobs at dangerous, elevated work sites.
Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]
Openpilot is an opensource driving agent which at the moment can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist System for a select few auto manufacturers.
Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]
OK, that might be an overstatement. But Oranchelo is good, really good.
sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme
11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]
I have been using "Cosmic Cuttlefish" since its first beta. It is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. Now let's discuss what can be done to improve the overall user experience by diving deep into the nitty-gritty of Canonical's brand-new flagship OS.
sudo apt install ubuntu-restricted-extras
sudo apt install gnome-tweaks
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update
sudo apt install safeeyes
sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers
sudo gedit /etc/default/apport
RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]
"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.
IBM, the 6th biggest contributor to Linux Kernel, acquires RedHat for $34 Billion [Tech Drive-in]
The $34 billion all-cash deal to purchase open source pioneer Red Hat is IBM's biggest-ever acquisition by far. The deal will give IBM a major foothold in the fast-growing cloud computing market, and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise," said Jim Whitehurst, President and CEO, Red Hat. "Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."
Predicting the future can be tricky. A lot of things can go wrong. But one thing is sure: the acquisition of Red Hat by IBM is nothing like the Oracle-Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.
How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]
One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt autoremove
$ sudo gedit /etc/update-manager/release-upgrades
$ sudo do-release-upgrade -d
Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]
A real private mode in Firefox? A Tor-integrated Firefox could be just that. The Tor Project is currently working with Mozilla to integrate Tor into Firefox.
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."
If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at the Tor 'Project Fusion' page.
City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]
In another major win within a span of weeks for the proponents of open source solutions in Europe, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as its software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training on an IT infrastructure that is largely open source.
Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]
Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which, unlike those fancy public cloud solutions, is completely private and under its direct control.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."
ITZBund employs about 2,700 people, including IT specialists, engineers, and network and security professionals. After the successful completion of the pilot, a public tender was floated by ITZBund, which eventually selected Nextcloud as its preferred partner. Nextcloud scored high on security requirements and scalability, which it addressed through its unique Apps concept.
LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]
Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.
Announcement of LibreOffice 7.1.7 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, November 4, 2021 – LibreOffice 7.1.7 Community, the seventh minor release of the LibreOffice 7.1 family, targeted at desktop productivity, is available for download from https://www.libreoffice.org/download/.
End user support is provided by volunteers via email and online resources: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.
LibreOffice 7.1.7 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.1.7/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.1.7/RC2 (changed in RC2).
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
LibreOffice 7.1.7 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.
The Document Foundation announces LibreOffice 7.2.2 Community [Press Releases Archives - The Document Foundation Blog]
Berlin, October 14, 2021 – The Document Foundation announces LibreOffice 7.2.2 Community, the second minor release of the LibreOffice 7.2 family targeted at technology enthusiasts and power users, which is available for download from https://www.libreoffice.org/download/. This version includes 68 bug fixes and improvements to document compatibility.
LibreOffice 7.2.2 Community is also available for Apple Silicon from this link: https://download.documentfoundation.org/libreoffice/stable/7.2.2/mac/aarch64/.
For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.
LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.
Availability of LibreOffice 7.2.2 Community
LibreOffice 7.2.2 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity, and who therefore prefer a release that has undergone more testing and bug fixing, The Document Foundation provides LibreOffice 7.1.6.
LibreOffice 7.2.2 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.2.2/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.2.2/RC2 (changed in RC2).
LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while products for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/
LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.
LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.
LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.
LibreOffice 7.2.2 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.
Django Authentication Video Tutorial [Simple is Better Than Complex]
In this tutorial series, we are going to explore Django’s authentication system by implementing sign up, login, logout, password change, password reset and protected views from non-authenticated users. This tutorial is organized in 8 videos, one for each topic, ranging from 4 min to 15 min each.
Starting a Django project from scratch, creating a virtual environment and an initial Django app. After that, we are going to set up the templates and create an initial view to start working on the authentication.
If you are already familiar with Django, you can skip this video and jump to the Sign Up tutorial below.
The first thing we are going to do is implement a sign up view using the built-in UserCreationForm. In this video you are also going to get some insights on basic Django form processing.
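The video builds this step by step; as a minimal sketch of such a view (the template name and the "home" URL name are illustrative, not necessarily the ones used in the video):

from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render

def signup(request):
    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()    # creates the new user
            login(request, user)  # authenticate them right away
            return redirect("home")
    else:
        form = UserCreationForm()
    return render(request, "signup.html", {"form": form})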
In this video tutorial we are going to first include the built-in Django auth URLs to our project and proceed to implement the login view.
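Wiring in the built-in auth URLs is a one-liner in the project's urls.py; a sketch using the path() syntax (the video may use the older url() form):

from django.urls import include, path

urlpatterns = [
    # provides login/, logout/, password_change/, password_reset/ and related views
    path("accounts/", include("django.contrib.auth.urls")),
]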
In this tutorial we are going to include Django logout and also start playing with conditional templates, displaying different content depending if the user is authenticated or not.
The password change is a view where an authenticated user can change their password.
This tutorial is perhaps the most complicated one, because it involves several views and also sending emails. In this video tutorial you are going to learn how to use the default implementation of the password reset process and how to change the email messages.
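As a hedged sketch of how the email messages can be customized, the class-based PasswordResetView accepts alternative templates (the template file names below are illustrative):

from django.contrib.auth import views as auth_views
from django.urls import path

urlpatterns = [
    path(
        "password-reset/",
        auth_views.PasswordResetView.as_view(
            email_template_name="registration/my_password_reset_email.html",
            subject_template_name="registration/my_password_reset_subject.txt",
        ),
        name="password_reset",
    ),
]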
After implementing the whole authentication system, this video gives you an overview of how to protect some views from non-authenticated users by using the @login_required decorator and also class-based view mixins.
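Both protection mechanisms look roughly like this (a minimal sketch; view names are illustrative):

from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponse
from django.views import View

@login_required
def secret_page(request):
    # only reachable by authenticated users; others are redirected to the login page
    return HttpResponse("For authenticated users only.")

class SecretPageView(LoginRequiredMixin, View):
    def get(self, request):
        return HttpResponse("Also protected, class-based style.")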
Extra video showing how to integrate Django with Bootstrap 4 and how to use Django Crispy Forms to render Bootstrap forms properly. This video also includes some general advice and tips about using Bootstrap 4.
If you want to learn more about Django authentication and some extra stuff related to it, like how to use Bootstrap to make your auth forms look good, or how to write unit tests for your auth-related views, you can read the fourth part of my beginner's guide to Django: A Complete Beginner’s Guide to Django - Part 4 - Authentication.
Of course the official documentation is the best source of information: Using the Django authentication system
The code used in this tutorial: github.com/sibtc/django-auth-tutorial-example
This was my first time recording this kind of content, so your feedback is highly appreciated. Please let me know what you think!
And don’t forget to subscribe to my YouTube channel! I will post exclusive Django tutorials there. So stay tuned! :-)
What You Should Know About The Django User Model [Simple is Better Than Complex]
The goal of this article is to discuss the caveats of the default Django user model implementation and to give you some advice on how to address them. It is important to know the limitations of the current implementation so as to avoid the most common pitfalls.
Something to keep in mind is that the Django user model is heavily based on its initial implementation, which is at least 16 years old. Because users and authentication are a core part of the majority of web applications built with Django, most of its quirks have persisted in subsequent releases so as to maintain backward compatibility.
The good news is that Django offers many ways to override and customize its default implementation to fit your application's needs. But some of those changes must be made right at the beginning of the project; otherwise it will be too much of a hassle to change the database structure after your application is in production.
Below, the topics that we are going to cover in this article:
First, let’s explore the caveats and next we discuss the options.
Even though the username field is marked as unique, by default it is not case-sensitive. That means the usernames john.doe and John.doe identify two different users in your application.
This can be a security issue if your application has social aspects that build around the username, providing a public URL to a profile like Twitter, Instagram or GitHub, for example.
It also delivers a poor user experience, because people don't expect john.doe to be a different username than John.Doe, and if the user doesn't type the username exactly the same way as when they created their account, they might be unable to log in to your application.
Possible Solutions:
- Replace the CharField with the CICharField instead (which is case-insensitive)
- Override the method get_by_natural_key from the UserManager to query the database using iexact (sketched below)
- Create a custom authentication backend based on the ModelBackend implementation
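As a rough sketch of the second option (the manager name is illustrative; the article's own workaround later uses a custom authentication backend instead):

from django.contrib.auth.models import UserManager

class CaseInsensitiveUserManager(UserManager):
    def get_by_natural_key(self, username):
        # query the USERNAME_FIELD case-insensitively
        return self.get(**{self.model.USERNAME_FIELD + "__iexact": username})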
This is not necessarily an issue, but it is important for you to understand what it means and what its effects are.
By default the username field accepts letters, numbers and the characters @, ., +, - and _.
The catch here is on which letters it accepts. For example, joão would be a valid username. Similarly, Джон or 約翰 would also be valid usernames.
Django ships with two username validators: ASCIIUsernameValidator and UnicodeUsernameValidator. If the intended behavior is to only accept letters from A-Z, you may want to switch the username validator to use ASCII letters only, by using the ASCIIUsernameValidator.
Possible Solutions:
- Switch the username validator to the ASCIIUsernameValidator
Multiple users can have the same email address associated with their account.
By default the email is used to recover a password. If there is more than one user with the same email address, the password reset will be initiated for all accounts and the user will receive an email for each active account.
It also may not be an issue, but it will certainly make it impossible to offer the option to authenticate users by email address (like those sites that allow you to log in with either username or email address).
Possible Solutions:
- Replace the default user model, extending the AbstractBaseUser to define the email field from scratch

By default the email field does not allow null values; however, it does allow blank values, so it pretty much lets users not provide an email address.
Also, this may not be an issue for your application. But if you intend to allow users to log in with email it may be a good idea to enforce the registration of this field.
When using the built-in resources like user creation forms or when using model forms you need to pay attention to this detail if the desired behavior is to always have the user email.
Possible Solutions:
- Replace the default user model, extending the AbstractBaseUser to define the email field from scratch

There is a small catch in the user creation process: if the set_password method is called with None as a parameter, it will produce an unusable password. That also means the user will be unable to start a password reset to set their first password.
You can end up in that situation if you are using social networks like Facebook or Twitter to allow the user to create an account on your website.
Another way of ending up in this situation is simply by creating a user using User.objects.create_user() or User.objects.create_superuser() without providing an initial password.
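A quick sketch of the catch (the username is illustrative):

from django.contrib.auth import get_user_model

User = get_user_model()
user = User.objects.create_user("john.doe")  # no password provided
user.has_usable_password()  # False: this account cannot start a password reset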
Possible Solutions:
Changing the user model is something you want to do early on. After your database schema is generated and your database is populated it will be very tricky to swap the user model.
The reason is that you are likely to have foreign keys referencing the user table, and Django's internal tables will create hard references to the user table as well. If you plan to change it later on, you will need to change and migrate the database by yourself.
Possible Solutions:
- Always replace the default user model early on, extending the AbstractUser and changing a single configuration in the settings module, as sketched below. This will give you tremendous freedom and it will make things way easier in the future should the requirements change.
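A minimal sketch of that solution (the app label users is illustrative):

# users/models.py
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    pass  # extra fields can be added later without swapping the model

# settings.py
AUTH_USER_MODEL = "users.User"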
To address the limitations we discussed in this article we have two options: (1) implement workarounds to fix the behavior of the default user model; or (2) replace the default user model altogether and fix the issues for good.
What is going to dictate which approach you need to use is the stage your project is currently in: if you have an existing project running in production with the default django.contrib.auth.models.User, go with the first solution, implementing the workarounds; if you are starting a new project, replace the user model right away.
First let’s have a look at a few workarounds that you can implement if your project is already in production. Keep in mind that those solutions assume that you don’t have direct access to the User model, that is, you are currently using the default User model, importing it from django.contrib.auth.models.
If you did replace the User model, then jump to the next section to get better tips on how to fix the issues.
Before making any changes you need to make sure you don’t have conflicting usernames on your database. For example, if you have a User with the username maria and another with the username Maria, you have to plan a data migration first. It is difficult to tell you what to do, because it really depends on how you want to handle it. One option is to append some digits after the username, but that can disturb the user experience.
Now let’s say you checked your database and there are no conflicting usernames and you are good to go.
The first thing you need to do is protect your sign up forms so that conflicting usernames can’t be used to create accounts.
Then on your user creation form, used to sign up, you could validate the username like this:
def clean_username(self):
    username = self.cleaned_data.get("username")
    if User.objects.filter(username__iexact=username).exists():
        self.add_error("username", "A user with this username already exists.")
    return username
If you are handling user creation in a REST API using DRF, you can do something similar in your serializer:
def validate_username(self, value):
    if User.objects.filter(username__iexact=value).exists():
        raise serializers.ValidationError("A user with this username already exists.")
    return value
In the previous example, the mentioned ValidationError is the one defined in DRF. The iexact notation on the queryset parameter will query the database ignoring the case.
Now that the user creation is sanitized we can proceed to define a custom authentication backend.
Create a module named backends.py anywhere in your project and add the following snippet:
backends.py
from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend


class CaseInsensitiveModelBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        if username is None:
            username = kwargs.get(UserModel.USERNAME_FIELD)
        try:
            case_insensitive_username_field = '{}__iexact'.format(UserModel.USERNAME_FIELD)
            user = UserModel._default_manager.get(**{case_insensitive_username_field: username})
        except UserModel.DoesNotExist:
            # Run the default password hasher once to reduce the timing
            # difference between an existing and a non-existing user (#20760).
            UserModel().set_password(password)
        else:
            if user.check_password(password) and self.user_can_authenticate(user):
                return user
Now switch the authentication backend in the settings.py module:
settings.py
AUTHENTICATION_BACKENDS = ('mysite.core.backends.CaseInsensitiveModelBackend', )
Please note that 'mysite.core.backends.CaseInsensitiveModelBackend' must be changed to the valid path where you created the backends.py module.
It is important to have handled all conflicting users before changing the authentication backend, because otherwise it could raise a 500 exception, MultipleObjectsReturned.
Here we can borrow the built-in UsernameField and customize it to append the ASCIIUsernameValidator to the list of validators:
from django.contrib.auth.forms import UsernameField
from django.contrib.auth.validators import ASCIIUsernameValidator
class ASCIIUsernameField(UsernameField):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.validators.append(ASCIIUsernameValidator())
Then on the Meta of your User creation form you can replace the form field class:
class UserCreationForm(forms.ModelForm):
# field definitions...
class Meta:
model = User
fields = ("username",)
field_classes = {'username': ASCIIUsernameField}
Here all you can do is sanitize and handle the user input in every view where your users can modify their email address.
You have to include the email field on your sign up form/serializer as well.
Then just make it mandatory like this:
class UserCreationForm(forms.ModelForm):
email = forms.EmailField(required=True)
# other field definitions...
class Meta:
model = User
fields = ("username",)
field_classes = {'username': ASCIIUsernameField}
def clean_email(self):
email = self.cleaned_data.get("email")
if User.objects.filter(email__iexact=email).exists():
self.add_error("email", _("A user with this email already exists."))
return email
You can also check a complete and detailed example of this form on the project shared together with this post: userworkarounds
Now I’m going to show you how I usually like to extend and replace the default User model. It is a little bit verbose but that is the strategy that will allow you to access all the inner parts of the User model and make it better.
To replace the User model you have two options: extending the AbstractBaseUser or extending the AbstractUser.
To illustrate what that means, I drew the following diagram of how the default Django user model is implemented:
The green circle identified with the label User is actually the one you import from django.contrib.auth.models, and that is the implementation we discussed in this article.
If you look at the source code, its implementation looks like this:
class User(AbstractUser):
class Meta(AbstractUser.Meta):
swappable = 'AUTH_USER_MODEL'
So basically it is just an implementation of the AbstractUser, meaning all the fields and logic are implemented in the abstract class. It is done that way so we can easily extend the User model by creating a subclass of the AbstractUser and adding any other features and fields we like.
But there is a limitation: you can’t override an existing model field. For example, you can’t re-define the email field to make it mandatory or to change its length.
So extending the AbstractUser class is only useful when you want to modify its methods, add more fields or swap the objects manager. If you want to remove a field or change how the field is defined, you have to extend the user model from the AbstractBaseUser.
The best strategy to have full control over the user model is to create a new concrete class from the PermissionsMixin and the AbstractBaseUser. Note that the PermissionsMixin is only necessary if you intend to use the Django admin or the built-in permissions framework. If you are not planning to use them, you can leave it out; and if things change in the future, you can add the mixin, migrate the model, and you are ready to go.
So the implementation strategy looks like this:
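In code, the skeleton of that strategy is simply a new concrete class built on those two base classes (a minimal sketch; the full implementation follows below):
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin

class CustomUser(AbstractBaseUser, PermissionsMixin):
    # fields, USERNAME_FIELD, objects manager and methods go here
    ...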
Now I’m going to show you my go-to implementation. I always use PostgreSQL which, in my opinion, is the best database to use with Django; at least it is the one with the most support and features. So I’m going to show an approach that uses PostgreSQL’s CITextExtension. Then I will show some options if you are using other database engines.
For this implementation I always create an app named accounts:
django-admin startapp accounts
Then before adding any code I like to create an empty migration to install the PostgreSQL extensions that we are going to use:
python manage.py makemigrations accounts --empty --name="postgres_extensions"
Inside the migrations directory of the accounts app you will find an empty migration called 0001_postgres_extensions.py.
Modify the file to include the extension installation:
migrations/0001_postgres_extensions.py
from django.contrib.postgres.operations import CITextExtension
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
CITextExtension()
]
Now let’s implement our model. Open the models.py file inside the accounts app.
I always grab the initial code directly from Django’s source on GitHub, copying the AbstractUser implementation, and modify it accordingly:
accounts/models.py
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager
from django.contrib.auth.validators import ASCIIUsernameValidator
from django.contrib.postgres.fields import CICharField, CIEmailField
from django.core.mail import send_mail
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
class CustomUser(AbstractBaseUser, PermissionsMixin):
username_validator = ASCIIUsernameValidator()
username = CICharField(
_("username"),
max_length=150,
unique=True,
help_text=_("Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only."),
validators=[username_validator],
error_messages={
"unique": _("A user with that username already exists."),
},
)
first_name = models.CharField(_("first name"), max_length=150, blank=True)
last_name = models.CharField(_("last name"), max_length=150, blank=True)
email = CIEmailField(
_("email address"),
unique=True,
error_messages={
"unique": _("A user with that email address already exists."),
},
)
is_staff = models.BooleanField(
_("staff status"),
default=False,
help_text=_("Designates whether the user can log into this admin site."),
)
is_active = models.BooleanField(
_("active"),
default=True,
help_text=_(
"Designates whether this user should be treated as active. Unselect this instead of deleting accounts."
),
)
date_joined = models.DateTimeField(_("date joined"), default=timezone.now)
objects = UserManager()
EMAIL_FIELD = "email"
USERNAME_FIELD = "username"
REQUIRED_FIELDS = ["email"]
class Meta:
verbose_name = _("user")
verbose_name_plural = _("users")
def clean(self):
super().clean()
self.email = self.__class__.objects.normalize_email(self.email)
def get_full_name(self):
"""
Return the first_name plus the last_name, with a space in between.
"""
full_name = "%s %s" % (self.first_name, self.last_name)
return full_name.strip()
def get_short_name(self):
"""Return the short name for the user."""
return self.first_name
def email_user(self, subject, message, from_email=None, **kwargs):
"""Send an email to this user."""
send_mail(subject, message, from_email, [self.email], **kwargs)
Let’s review what we changed here:
- username_validator now uses the ASCIIUsernameValidator
- The username field now uses CICharField, which is not case-sensitive
- The email field is now mandatory, unique, and uses CIEmailField, which is not case-sensitive
On the settings module, add the following configuration:
settings.py
AUTH_USER_MODEL = "accounts.CustomUser"
Now we are ready to create our migrations:
python manage.py makemigrations
Apply the migrations:
python manage.py migrate
And you should get a similar result if you are just creating your project and there are no other models/apps:
Operations to perform:
Apply all migrations: accounts, admin, auth, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
If you check your database schema you will see that there is no auth_user table (which is the default one); the users are now stored in the table accounts_customuser. All the foreign keys to the user model will be created pointing to this table. That’s why it is important to do it right at the beginning of your project, before you create the database schema.
Now you have all the freedom. You can replace the first_name and last_name fields with a single field called name. You could remove the username field and identify your User model by the email (then just make sure you change the property USERNAME_FIELD to email).
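As an illustration, a hedged sketch of that email-only variation (the class name is hypothetical; note that Django’s default UserManager assumes a username field, so this variation also needs a custom manager, which is not shown here):
class EmailUser(AbstractBaseUser, PermissionsMixin):
    email = CIEmailField(_("email address"), unique=True)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)

    USERNAME_FIELD = "email"  # authenticate with the email address instead
    REQUIRED_FIELDS = []      # email is already required as the USERNAME_FIELD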
You can grab the source code on GitHub: customuser
If you are not using PostgreSQL and want to implement case-insensitive authentication and you have direct access to the User model, a nice hack is to create a custom manager for the User model, like this:
accounts/models.py
from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager

class CustomUserManager(UserManager):
    def get_by_natural_key(self, username):
        # Look the user up case-insensitively via the model's USERNAME_FIELD.
        case_insensitive_username_field = '{}__iexact'.format(self.model.USERNAME_FIELD)
        return self.get(**{case_insensitive_username_field: username})

class CustomUser(AbstractBaseUser, PermissionsMixin):
    # all the fields, etc...
    objects = CustomUserManager()
    # meta, methods, etc...
Then you could also sanitize the username field in the clean() method to always save it as lowercase, so you don’t have to bother with case-variant/conflicting usernames:
def clean(self):
super().clean()
self.email = self.__class__.objects.normalize_email(self.email)
self.username = self.username.lower()
In this tutorial we discussed a few caveats of the default User model implementation and presented a few options to address those issues.
The takeaway message here is: always replace the default User model.
If your project is already in production, don’t panic: there are ways to fix those issues following the recommendations in this post.
I also have two detailed blog posts: one on how to make the username field case-insensitive and another about how to extend the Django user model:
You can also explore the source code presented in this post on GitHub:
How to Start a Production-Ready Django Project [Simple is Better Than Complex]
In this tutorial I’m going to show you how I usually start and organize a new Django project nowadays. I’ve tried many different configurations and ways to organize the project, but for the past 4 years or so this has been consistently my go-to setup.
Please note that this is not intended to be a “best practice” guide or to fit every use case. It’s just the way I like to use Django, and it’s also the way I’ve found that allows a project to grow in a healthy way.
Usually those are the premises I take into account when setting up a project:
Usually I work with three environment dimensions in my code: local, tests and production. I like to see it as a “mode” in which I run the project. What dictates which mode I’m running the project in is which settings.py I’m currently using.
The local dimension always comes first. It holds the settings and setup that a developer will use on their local machine. All the defaults and configurations must serve the local development environment first.
The reason why I like to do it that way is that the project must be as simple as possible for a new hire to clone the repository, run the project and start coding.
The production environment will usually be configured and maintained by experienced developers who are more familiar with the code base itself. And because the deployment should be automated, there is no reason for people to be re-creating the production server over and over again. So it is perfectly fine for the production setup to require a few extra steps and configuration.
The tests environment will also be available locally, so developers can test the code and run the static checks. But the idea of the tests environment is to expose it to a CI environment like Travis CI, Circle CI, AWS CodePipeline, etc. It is a simple setup where you can install the project and run all the unit tests.
The production dimension is the real deal. This is the environment that goes live without the testing and debugging utilities.
I also use this “mode” or dimension to run the staging server.
A staging server is where you roll out new features and bug fixes before applying to the production server.
The idea here is that your staging server should run in production mode, and the only differences are going to be your static/media server and database server. And this can be achieved just by changing the configuration, for example the database connection string. But the main thing is that you should not have any conditional in your code that checks whether it is the production or the staging server. The project should run in exactly the same way as in production.
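For example, a hypothetical .env on the staging server could differ from production only in its configuration values (everything below is illustrative; the settings machinery that reads these variables is described later in this post):
.env
DEBUG=False
ALLOWED_HOSTS=staging.example.com
DATABASE_URL=postgres://simple:secret@staging-db.example.com:5432/simple
SIMPLE_ENVIRONMENT=staging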
Right from the beginning it is a good idea to setup a remote version control service. My go-to option is Git on GitHub. Usually I create the remote repository first then clone it on my local machine to get started.
Let’s say our project is called simple. After creating the repository on GitHub I will create a directory named simple on my local machine, then within that simple directory I will clone the repository, as shown in the structure below:
simple/
└── simple/ (git repo)
Then I create the virtualenv outside of the Git repository:
simple/
├── simple/
└── venv/
Then alongside the simple and venv directories I may place some other support files related to the project which I do not plan to commit to the Git repository.
The reason I do that is that it is more convenient to destroy and re-create/re-clone either the virtual environment or the repository itself.
It is also good to store your virtual environment outside of the git repository/project root so you don’t need to bother ignoring its path when using libs like flake8, isort, black, tox, etc.
You can also use tools like virtualenvwrapper to manage your virtual environments, but I prefer doing it this way because everything is in one place. And if I no longer need to keep a given project on my local machine, I can delete it completely without leaving anything related to the project behind.
The next step is installing Django inside the virtualenv so we can use the django-admin commands.
source venv/bin/activate
pip install django
Inside the simple directory (where the git repository was cloned) start a new project:
django-admin startproject simple .
Pay attention to the . at the end of the command; it is necessary to avoid creating yet another directory called simple.
So now the structure should be something like this:
simple/ <- (1) Wrapper directory with all project contents including the venv
├── simple/ <- (2) Project root and git repository
│ ├── .git/
│ ├── manage.py
│ └── simple/ <- (3) Project package, apps, templates, static, etc
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
At this point I already complement the project package directory with three extra directories: templates, static and locale.
Both templates and static are going to be managed at both the project level and the app level; the directories here refer to the global templates and static files.
The locale directory is necessary in case you are using i18n to translate your application to other languages; this is where you are going to store the .mo and .po files.
So the structure now should be something like this:
simple/
├── simple/
│ ├── .git/
│ ├── manage.py
│ └── simple/
│ ├── locale/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
Inside the project root (2) I like to create a directory called requirements with all the .txt files, breaking down the project dependencies like this:
- base.txt: main dependencies, strictly necessary to make the project run. Common to all environments
- tests.txt: inherits from base.txt + test utilities
- local.txt: inherits from tests.txt + development utilities
- production.txt: inherits from base.txt + production-only dependencies
Note that I do not have a staging.txt requirements file; that’s because the staging environment is going to use the production.txt requirements, so we have an exact copy of the production environment.
simple/
├── simple/
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── tests.txt
│ └── simple/
│ ├── locale/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
Now let’s have a look inside each of those requirements files and at the Python libraries that I always use, no matter what type of Django project I’m developing.
base.txt
dj-database-url==0.5.0
Django==3.2.4
psycopg2-binary==2.9.1
python-decouple==3.4
pytz==2021.1
Here python-decouple reads configuration from environment variables and .env files in a safe way, exposing the values to the settings.py module. It also helps with decoupling configuration from source code.
tests.txt
-r base.txt
black==21.6b0
coverage==5.5
factory-boy==3.2.0
flake8==3.9.2
isort==5.9.1
tox==3.23.1
The -r base.txt line inherits all the requirements defined in the base.txt file.
local.txt
-r tests.txt
django-debug-toolbar==3.2.1
ipython==7.25.0
The -r tests.txt line inherits all the requirements defined in the base.txt and tests.txt files.
production.txt
-r base.txt
gunicorn==20.1.0
sentry-sdk==1.1.0
The -r base.txt line inherits all the requirements defined in the base.txt file.
Also, following the environments-and-modes premise, I like to set up multiple settings modules. These are going to serve as the entry point that determines in which mode I’m running the project.
Inside the simple project package, I create a new directory called settings and break down the files like this:
simple/ (1)
├── simple/ (2)
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── tests.txt
│ └── simple/ (3)
│ ├── locale/
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── local.py
│ │ ├── production.py
│ │ └── tests.py
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
Note that I removed the settings.py that used to live inside the simple/ (3) directory.
The majority of the code will live inside the base.py settings module. Everything that we can set only once in base.py and change via python-decouple should stay in base.py and never be repeated/overridden in the other settings modules.
After the removal of the main settings.py, a nice touch is to modify the manage.py file to set local.py as the default settings module, so we can still run commands like python manage.py runserver without any further parameters:
manage.py
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.local') # <- here!
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
Now let’s have a look at each of those settings modules.
base.py
from pathlib import Path
import dj_database_url
from decouple import Csv, config
BASE_DIR = Path(__file__).resolve().parent.parent
# ==============================================================================
# CORE SETTINGS
# ==============================================================================
SECRET_KEY = config("SECRET_KEY", default="django-insecure$simple.settings.local")
DEBUG = config("DEBUG", default=True, cast=bool)
ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="127.0.0.1,localhost", cast=Csv())
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
]
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
ROOT_URLCONF = "simple.urls"
INTERNAL_IPS = ["127.0.0.1"]
WSGI_APPLICATION = "simple.wsgi.application"
# ==============================================================================
# MIDDLEWARE SETTINGS
# ==============================================================================
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
# ==============================================================================
# TEMPLATES SETTINGS
# ==============================================================================
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [BASE_DIR / "templates"],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
# ==============================================================================
# DATABASES SETTINGS
# ==============================================================================
DATABASES = {
"default": dj_database_url.config(
default=config("DATABASE_URL", default="postgres://simple:simple@localhost:5432/simple"),
conn_max_age=600,
)
}
# ==============================================================================
# AUTHENTICATION AND AUTHORIZATION SETTINGS
# ==============================================================================
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# ==============================================================================
# I18N AND L10N SETTINGS
# ==============================================================================
LANGUAGE_CODE = config("LANGUAGE_CODE", default="en-us")
TIME_ZONE = config("TIME_ZONE", default="UTC")
USE_I18N = True
USE_L10N = True
USE_TZ = True
LOCALE_PATHS = [BASE_DIR / "locale"]
# ==============================================================================
# STATIC FILES SETTINGS
# ==============================================================================
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR.parent.parent / "static"
STATICFILES_DIRS = [BASE_DIR / "static"]
STATICFILES_FINDERS = (
"django.contrib.staticfiles.finders.FileSystemFinder",
"django.contrib.staticfiles.finders.AppDirectoriesFinder",
)
# ==============================================================================
# MEDIA FILES SETTINGS
# ==============================================================================
MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR.parent.parent / "media"
# ==============================================================================
# THIRD-PARTY SETTINGS
# ==============================================================================
# ==============================================================================
# FIRST-PARTY SETTINGS
# ==============================================================================
SIMPLE_ENVIRONMENT = config("SIMPLE_ENVIRONMENT", default="local")
A few comments on the overall base settings file contents:
- The config() calls are from the python-decouple library. They expose the configuration as environment variables and retrieve the values cast to the expected data types. Read more about python-decouple on this guide: How to Use Python Decouple
- SECRET_KEY, DEBUG and ALLOWED_HOSTS default to local/development environment values, so a new developer won’t need to set up a local .env and provide initial values to run the project locally
- For the database configuration we use dj_database_url to translate the one-line connection string into the Python dictionary Django expects
- For the MEDIA_ROOT we navigate two directories up to create a media directory outside the git repository but inside our project workspace (inside the directory simple/ (1)). That way everything is handy and we won’t be committing test uploads to our repository
- At the end of the base.py settings I reserve two blocks: one for third-party Django libraries that I may install, such as Django Rest Framework or Django Crispy Forms, and one for first-party settings, custom settings created exclusively for this project, usually prefixed with the project name, like SIMPLE_XXX
local.py
# flake8: noqa
from .base import *
INSTALLED_APPS += ["debug_toolbar"]
MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")
# ==============================================================================
# EMAIL SETTINGS
# ==============================================================================
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
Here is where I set up the Django Debug Toolbar, for example, or set the email backend to display sent emails on the console instead of having to configure a valid email server to work on the project. All the code that is only relevant for the development process goes here. You can also use it to set up libs like Django Silk to run profiling without exposing it to production.
tests.py
# flake8: noqa
from .base import *
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]
class DisableMigrations:
def __contains__(self, item):
return True
def __getitem__(self, item):
return None
MIGRATION_MODULES = DisableMigrations()
Here I add configurations that help us run the test cases faster. Sometimes disabling the migrations may not work: if you have interdependencies between the apps’ models, Django may fail to create a test database without the migrations. In some projects it is better to keep the test database after the execution.
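Django’s test runner supports this out of the box: passing the --keepdb flag preserves the test database between runs.
python manage.py test --keepdb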
production.py
# flake8: noqa
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
import simple
from .base import *
# ==============================================================================
# SECURITY SETTINGS
# ==============================================================================
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True
SECURE_HSTS_SECONDS = 60 * 60 * 24 * 7 * 52 # one year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SESSION_COOKIE_SECURE = True
# ==============================================================================
# THIRD-PARTY APPS SETTINGS
# ==============================================================================
sentry_sdk.init(
dsn=config("SENTRY_DSN", default=""),
environment=SIMPLE_ENVIRONMENT,
release="simple@%s" % simple.__version__,
integrations=[DjangoIntegration()],
)
The most important part of the production settings is enabling all the security settings Django offers. I like to do it that way because you can’t run the development server with most of those configurations turned on.
The other thing is the Sentry configuration.
Note the simple.__version__ on the release. Next we are going to explore how I usually manage the version of the project.
I like to reuse Django’s get_version utility for a simple, PEP 440 compliant version identification. Inside the project’s __init__.py module:
simple/
├── simple/
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ └── simple/
│ ├── locale/
│ ├── settings/
│ ├── static/
│ ├── templates/
│ ├── __init__.py <-- here!
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
You can do something like this:
from django import get_version
VERSION = (1, 0, 0, "final", 0)
__version__ = get_version(VERSION)
The only downside of using get_version directly from the Django module is that it won’t be able to resolve the git hash for alpha versions. A possible solution is making a copy of the django/utils/version.py file in your project and importing it locally, so it will be able to identify your git repository within the project folder.
But it also depends on what kind of versioning you are using for your project. If the version of your project is not really relevant to the end user and you only want to keep track of it for internal management, for example to identify the release of a Sentry issue, you could use date-based release versioning.
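In that case the version tuple can simply be swapped for a date-based string, for example (the value is illustrative):
__version__ = "2021.06.30"  # hypothetical date-based release identifier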
A Django app is a Python package that you “install” via the INSTALLED_APPS setting in your settings file. An app can live pretty much anywhere: inside or outside the project package, or even in a library that you installed using pip.
Indeed, your Django apps may be reusable in other projects. But that doesn’t mean they should be. Don’t let that idea destroy your project design, and don’t get obsessed over it. Also, an app shouldn’t necessarily represent a “part” of your website/web application. It is perfectly fine for some apps to not have models, and for others to have only views. Some of your modules don’t even need to be a Django app at all. I like to see my Django project as one big Python package and organize it in a way that makes sense, rather than trying to place everything inside reusable apps.
The general recommendation of the official Django documentation is to place your apps in the project root (alongside the manage.py file, identified in this tutorial by the simple/ (2) folder).
But I actually prefer to create my apps inside the project package (identified in this tutorial by the simple/ (3) folder). I create a module named apps, and inside it I create my Django apps. The main reason is that this creates a nice namespace for the apps: it helps you easily identify that a particular import is part of your project, and the namespace also helps when creating logging rules to handle events in a different way, as the sketch below illustrates.
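As a hedged illustration of that last point, a single logger entry keyed on the project namespace catches events from every app under it (the handler setup below is illustrative, not from the original post):
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Matches simple.apps.accounts, simple.apps.core, etc.
        "simple": {"handlers": ["console"], "level": "INFO"},
    },
}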
Here is an example of how I do it:
simple/ (1)
├── simple/ (2)
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ └── simple/ (3)
│ ├── apps/ <-- here!
│ │ ├── __init__.py
│ │ ├── accounts/
│ │ └── core/
│ ├── locale/
│ ├── settings/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
In the example above the folders accounts/ and core/ are Django apps created with the command django-admin startapp.
Those two apps are also always present in my projects. The accounts app is the one I use to replace the default Django User model, and it is also where I eventually implement password reset, account activation, sign up, etc. The core app I use for general/global implementations, for example to define a model that will be used across most of the other apps. I try to keep it decoupled from the other apps, not importing other apps’ resources. It is usually a good place to implement general-purpose or reusable views and mixins.
Something to pay attention to when using this approach is that you need to change the name attribute of the app configuration, inside the apps.py file of the Django app:
accounts/apps.py
from django.apps import AppConfig
class AccountsConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'accounts' # <- this is the default name created by the startapp command
You should rename it like this, to respect the namespace:
from django.apps import AppConfig
class AccountsConfig(AppConfig):
default_auto_field = 'django.db.models.BigAutoField'
name = 'simple.apps.accounts' # <- change to this!
Then in your INSTALLED_APPS you are going to reference your apps like this:
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"simple.apps.accounts",
"simple.apps.core",
]
The namespace also helps to organize your INSTALLED_APPS, making your project’s apps easily recognizable.
This is what my app structure looks like:
simple/ (1)
├── simple/ (2)
│ ├── .git/
│ ├── manage.py
│ ├── requirements/
│ └── simple/ (3)
│ ├── apps/
│ │ ├── accounts/ <- My app structure
│ │ │ ├── migrations/
│ │ │ │ └── __init__.py
│ │ │ ├── static/
│ │ │ │ └── accounts/
│ │ │ ├── templates/
│ │ │ │ └── accounts/
│ │ │ ├── tests/
│ │ │ │ ├── __init__.py
│ │ │ │ └── factories.py
│ │ │ ├── __init__.py
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── constants.py
│ │ │ ├── models.py
│ │ │ └── views.py
│ │ ├── core/
│ │ └── __init__.py
│ ├── locale/
│ ├── settings/
│ ├── static/
│ ├── templates/
│ ├── __init__.py
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
The first thing I do is create a folder named tests so I can break down my tests into several files. I always add a factories.py to create my model factories using the factory-boy library.
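A minimal sketch of what such a factories.py might look like (assuming factory-boy is installed; the field values are illustrative):
tests/factories.py
import factory
from django.contrib.auth import get_user_model

class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = get_user_model()

    username = factory.Sequence(lambda n: "user{}".format(n))
    email = factory.Sequence(lambda n: "user{}@example.com".format(n))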
For both static and templates, always create a first-level directory with the same name as the app, to avoid name collisions when Django collects all the static files and tries to resolve the templates.
The admin.py may or may not be there, depending on whether I’m using the Django Admin contrib app.
Other common modules that you may have are utils.py, forms.py, managers.py, services.py, etc.
Now I’m going to show you the configuration that I use for tools like isort, black, flake8, coverage and tox.
The .editorconfig file is a standard recognized by all major IDEs and code editors. It helps the editor understand the file formatting rules used in the project: whether the project is indented with tabs or spaces, how many spaces/tabs, what the max length for a line of code is, and so on. I like to use Django’s own .editorconfig file. Here is what it looks like:
.editorconfig
# https://editorconfig.org/
root = true
[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8
# Docstrings and comments use max_line_length = 79
[*.py]
max_line_length = 119
# Use 2 spaces for the HTML files
[*.html]
indent_size = 2
# The JSON files contain newlines inconsistently
[*.json]
indent_size = 2
insert_final_newline = ignore
[**/admin/js/vendor/**]
indent_style = ignore
indent_size = ignore
# Minified JavaScript files shouldn't be changed
[**.min.js]
indent_style = ignore
insert_final_newline = ignore
# Makefiles always use tabs for indentation
[Makefile]
indent_style = tab
# Batch files use tabs for indentation
[*.bat]
indent_style = tab
[docs/**.txt]
max_line_length = 79
[*.yml]
indent_size = 2
Flake8 is a Python library that wraps PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. It is a great toolkit for checking your code base against coding style (PEP8), programming errors (like “library imported but unused” and “Undefined name”) and to check cyclomatic complexity.
To learn more about flake8, check this tutorial I posted a while ago: How to Use Flake8.
setup.cfg
[flake8]
exclude = .git,.tox,*/migrations/*
max-line-length = 119
isort is a Python utility/library to sort imports alphabetically and automatically separate them into sections. To learn more about isort, check this tutorial I posted a while ago: How to Use Python isort Library.
setup.cfg
[isort]
force_grid_wrap = 0
use_parentheses = true
combine_as_imports = true
include_trailing_comma = true
line_length = 119
multi_line_output = 3
skip = migrations
default_section = THIRDPARTY
known_first_party = simple
known_django = django
sections=FUTURE,STDLIB,DJANGO,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
Pay attention to known_first_party: it should be the name of your project, so isort can group your project’s imports.
Black is a life changing library to auto-format your Python applications. There is no way I’m coding with Python nowadays without using Black.
Here is the basic configuration that I use:
pyproject.toml
[tool.black]
line-length = 119
target-version = ['py38']
include = '\.pyi?$'
exclude = '''
/(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \.venv
| _build
| buck-out
| build
| dist
| migrations
)/
'''
In this tutorial I described my go-to project setup when working with Django. That’s pretty much how I start all my projects nowadays.
Here is the final project structure for reference:
simple/
├── simple/
│ ├── .git/
│ ├── .gitignore
│ ├── .editorconfig
│ ├── manage.py
│ ├── pyproject.toml
│ ├── requirements/
│ │ ├── base.txt
│ │ ├── local.txt
│ │ ├── production.txt
│ │ └── tests.txt
│ ├── setup.cfg
│ └── simple/
│ ├── __init__.py
│ ├── apps/
│ │ ├── accounts/
│ │ │ ├── migrations/
│ │ │ │ └── __init__.py
│ │ │ ├── static/
│ │ │ │ └── accounts/
│ │ │ ├── templates/
│ │ │ │ └── accounts/
│ │ │ ├── tests/
│ │ │ │ ├── __init__.py
│ │ │ │ └── factories.py
│ │ │ ├── __init__.py
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── constants.py
│ │ │ ├── models.py
│ │ │ └── views.py
│ │ ├── core/
│ │ │ ├── migrations/
│ │ │ │ └── __init__.py
│ │ │ ├── static/
│ │ │ │ └── core/
│ │ │ ├── templates/
│ │ │ │ └── core/
│ │ │ ├── tests/
│ │ │ │ ├── __init__.py
│ │ │ │ └── factories.py
│ │ │ ├── __init__.py
│ │ │ ├── admin.py
│ │ │ ├── apps.py
│ │ │ ├── constants.py
│ │ │ ├── models.py
│ │ │ └── views.py
│ │ └── __init__.py
│ ├── locale/
│ ├── settings/
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── local.py
│ │ ├── production.py
│ │ └── tests.py
│ ├── static/
│ ├── templates/
│ ├── asgi.py
│ ├── urls.py
│ └── wsgi.py
└── venv/
You can also explore the code on GitHub: django-production-template.
How to install Chrome OS on your (old) computer [Laatste Artikelen - Webwereld]
Google has been working hard on Chrome OS for years and, together with various computer manufacturers, releases Chrome devices running that operating system. But you don’t necessarily have to buy a dedicated device: you can also put the system on your (old) computer yourself, and we show you how.
How to Use Chart.js with Django [Simple is Better Than Complex]
Chart.js is a cool open source JavaScript library that helps you render HTML5 charts. It is responsive and counts with 8 different chart types.
In this tutorial we are going to explore a little bit of how to make Django talk with Chart.js and render some simple charts based on data extracted from our models.
For this tutorial all you are going to do is add the Chart.js lib to your HTML page:
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
You can download it from Chart.js official website and use it locally, or you can use it from a CDN using the URL above.
I’m going to use the same example I used for the tutorial How to Create Group By Queries With Django ORM which is a good complement to this tutorial because actually the tricky part of working with charts is to transform the data so it can fit in a bar chart / line chart / etc.
We are going to use the two models below, Country and City:
class Country(models.Model):
name = models.CharField(max_length=30)
class City(models.Model):
name = models.CharField(max_length=30)
country = models.ForeignKey(Country, on_delete=models.CASCADE)
population = models.PositiveIntegerField()
And the raw data stored in the database:
cities
id | name | country_id | population
1 | Tokyo | 28 | 36,923,000 |
2 | Shanghai | 13 | 34,000,000 |
3 | Jakarta | 19 | 30,000,000 |
4 | Seoul | 21 | 25,514,000 |
5 | Guangzhou | 13 | 25,000,000 |
6 | Beijing | 13 | 24,900,000 |
7 | Karachi | 22 | 24,300,000 |
8 | Shenzhen | 13 | 23,300,000 |
9 | Delhi | 25 | 21,753,486 |
10 | Mexico City | 24 | 21,339,781 |
11 | Lagos | 9 | 21,000,000 |
12 | São Paulo | 1 | 20,935,204 |
13 | Mumbai | 25 | 20,748,395 |
14 | New York City | 20 | 20,092,883 |
15 | Osaka | 28 | 19,342,000 |
16 | Wuhan | 13 | 19,000,000 |
17 | Chengdu | 13 | 18,100,000 |
18 | Dhaka | 4 | 17,151,925 |
19 | Chongqing | 13 | 17,000,000 |
20 | Tianjin | 13 | 15,400,000 |
21 | Kolkata | 25 | 14,617,882 |
22 | Tehran | 11 | 14,595,904 |
23 | Istanbul | 2 | 14,377,018 |
24 | London | 26 | 14,031,830 |
25 | Hangzhou | 13 | 13,400,000 |
26 | Los Angeles | 20 | 13,262,220 |
27 | Buenos Aires | 8 | 13,074,000 |
28 | Xi'an | 13 | 12,900,000 |
29 | Paris | 6 | 12,405,426 |
30 | Changzhou | 13 | 12,400,000 |
31 | Shantou | 13 | 12,000,000 |
32 | Rio de Janeiro | 1 | 11,973,505 |
33 | Manila | 18 | 11,855,975 |
34 | Nanjing | 13 | 11,700,000 |
35 | Rhine-Ruhr | 16 | 11,470,000 |
36 | Jinan | 13 | 11,000,000 |
37 | Bangalore | 25 | 10,576,167 |
38 | Harbin | 13 | 10,500,000 |
39 | Lima | 7 | 9,886,647 |
40 | Zhengzhou | 13 | 9,700,000 |
41 | Qingdao | 13 | 9,600,000 |
42 | Chicago | 20 | 9,554,598 |
43 | Nagoya | 28 | 9,107,000 |
44 | Chennai | 25 | 8,917,749 |
45 | Bangkok | 15 | 8,305,218 |
46 | Bogotá | 27 | 7,878,783 |
47 | Hyderabad | 25 | 7,749,334 |
48 | Shenyang | 13 | 7,700,000 |
49 | Wenzhou | 13 | 7,600,000 |
50 | Nanchang | 13 | 7,400,000 |
51 | Hong Kong | 13 | 7,298,600 |
52 | Taipei | 29 | 7,045,488 |
53 | Dallas–Fort Worth | 20 | 6,954,330 |
54 | Santiago | 14 | 6,683,852 |
55 | Luanda | 23 | 6,542,944 |
56 | Houston | 20 | 6,490,180 |
57 | Madrid | 17 | 6,378,297 |
58 | Ahmedabad | 25 | 6,352,254 |
59 | Toronto | 5 | 6,055,724 |
60 | Philadelphia | 20 | 6,051,170 |
61 | Washington, D.C. | 20 | 6,033,737 |
62 | Miami | 20 | 5,929,819 |
63 | Belo Horizonte | 1 | 5,767,414 |
64 | Atlanta | 20 | 5,614,323 |
65 | Singapore | 12 | 5,535,000 |
66 | Barcelona | 17 | 5,445,616 |
67 | Munich | 16 | 5,203,738 |
68 | Stuttgart | 16 | 5,200,000 |
69 | Ankara | 2 | 5,150,072 |
70 | Hamburg | 16 | 5,100,000 |
71 | Pune | 25 | 5,049,968 |
72 | Berlin | 16 | 5,005,216 |
73 | Guadalajara | 24 | 4,796,050 |
74 | Boston | 20 | 4,732,161 |
75 | Sydney | 10 | 5,000,500 |
76 | San Francisco | 20 | 4,594,060 |
77 | Surat | 25 | 4,585,367 |
78 | Phoenix | 20 | 4,489,109 |
79 | Monterrey | 24 | 4,477,614 |
80 | Inland Empire | 20 | 4,441,890 |
81 | Rome | 3 | 4,321,244 |
82 | Detroit | 20 | 4,296,611 |
83 | Milan | 3 | 4,267,946 |
84 | Melbourne | 10 | 4,650,000 |
countries
id | name
1 | Brazil |
2 | Turkey |
3 | Italy |
4 | Bangladesh |
5 | Canada |
6 | France |
7 | Peru |
8 | Argentina |
9 | Nigeria |
10 | Australia |
11 | Iran |
12 | Singapore |
13 | China |
14 | Chile |
15 | Thailand |
16 | Germany |
17 | Spain |
18 | Philippines |
19 | Indonesia |
20 | United States |
21 | South Korea |
22 | Pakistan |
23 | Angola |
24 | Mexico |
25 | India |
26 | United Kingdom |
27 | Colombia |
28 | Japan |
29 | Taiwan |
For the first example we are only going to retrieve the top 5 most populous cities and render it as a pie chart. In this strategy we are going to return the chart data as part of the view context and inject the results in the JavaScript code using the Django Template language.
views.py
from django.shortcuts import render
from mysite.core.models import City
def pie_chart(request):
labels = []
data = []
queryset = City.objects.order_by('-population')[:5]
for city in queryset:
labels.append(city.name)
data.append(city.population)
return render(request, 'pie_chart.html', {
'labels': labels,
'data': data,
})
Basically, in the view above we are iterating through the City queryset and building a list of labels and a list of data. In this case the data is the population count saved in the City model.
For the urls.py, just a simple route:
urls.py
from django.urls import path
from mysite.core import views
urlpatterns = [
path('pie-chart/', views.pie_chart, name='pie-chart'),
]
Now the template. I got a basic snippet from the Chart.js Pie Chart Documentation.
pie_chart.html
{% extends 'base.html' %}
{% block content %}
<div id="container" style="width: 75%;">
<canvas id="pie-chart"></canvas>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
var config = {
type: 'pie',
data: {
datasets: [{
data: {{ data|safe }},
backgroundColor: [
'#696969', '#808080', '#A9A9A9', '#C0C0C0', '#D3D3D3'
],
label: 'Population'
}],
labels: {{ labels|safe }}
},
options: {
responsive: true
}
};
window.onload = function() {
var ctx = document.getElementById('pie-chart').getContext('2d');
window.myPie = new Chart(ctx, config);
};
</script>
{% endblock %}
In the example above the base.html template is not important, but you can see it in the code example shared at the end of this post. This strategy is not ideal, but it works fine. The bad thing is that we are using the Django Template Language to interfere with the JavaScript logic. When we put {{ data|safe }} we are injecting a variable that came from the server directly into the JavaScript code.
The code above looks like this:
As the title says, we are now going to render a bar chart using an async call.
views.py
from django.shortcuts import render
from django.db.models import Sum
from django.http import JsonResponse
from mysite.core.models import City
def home(request):
return render(request, 'home.html')
def population_chart(request):
labels = []
data = []
queryset = City.objects.values('country__name').annotate(country_population=Sum('population')).order_by('-country_population')
for entry in queryset:
labels.append(entry['country__name'])
data.append(entry['country_population'])
return JsonResponse(data={
'labels': labels,
'data': data,
})
So here we are using two views. The home view is the main page where the chart is loaded. The other view, population_chart, has the sole responsibility of aggregating the data and returning a JSON response with the labels and data.
If you are wondering what this queryset is doing: it is grouping the cities by country and aggregating the total population of each country. The result is going to be a list of country + total population. To learn more about this kind of query, have a look at this post: How to Create Group By Queries With Django ORM
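For reference, the JSON payload returned by population_chart has the shape below (the country names and numbers here are illustrative, not the actual aggregation results):
{"labels": ["China", "India", "United States"], "data": [293000000, 104000000, 81000000]}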
urls.py
from django.urls import path
from mysite.core import views
urlpatterns = [
path('', views.home, name='home'),
path('population-chart/', views.population_chart, name='population-chart'),
]
home.html
{% extends 'base.html' %}
{% block content %}
<div id="container" style="width: 75%;">
<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
</div>
<script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
<script>
$(function () {
var $populationChart = $("#population-chart");
$.ajax({
url: $populationChart.data("url"),
success: function (data) {
var ctx = $populationChart[0].getContext("2d");
new Chart(ctx, {
type: 'bar',
data: {
labels: data.labels,
datasets: [{
label: 'Population',
backgroundColor: 'blue',
data: data.data
}]
},
options: {
responsive: true,
legend: {
position: 'top',
},
title: {
display: true,
text: 'Population Bar Chart'
}
}
});
}
});
});
</script>
{% endblock %}
Now we have a better separation of concerns. Looking at the chart container:
<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
We added a reference to the URL that holds the chart rendering logic. Later on we are using it to execute the Ajax call.
var $populationChart = $("#population-chart");
$.ajax({
url: $populationChart.data("url"),
success: function (data) {
// ...
}
});
Inside the success callback we then finally execute the Chart.js-related code using the JsonResponse data.
I hope this tutorial helped you to get started with working with charts using Chart.js. I published another tutorial on the same subject a while ago but using the Highcharts library. The approach is pretty much the same: How to Integrate Highcharts.js with Django.
If you want to grab the code I used in this tutorial you can find it here: github.com/sibtc/django-chartjs-example.
How to Save Extra Data to a Django REST Framework Serializer [Simple is Better Than Complex]
In this tutorial you are going to learn how to pass extra data to your serializer, before saving it to the database.
When using regular Django forms, there is a common pattern where we save the form with commit=False and then pass some extra data to the instance before saving it to the database, like this:
form = InvoiceForm(request.POST)
if form.is_valid():
invoice = form.save(commit=False)
invoice.user = request.user
invoice.save()
This is very useful because we can save the required information using only one database query, and it also makes it possible to handle non-nullable columns that were not defined in the form.
To simulate this pattern using a Django REST Framework serializer you can do something like this:
serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
serializer.save(user=request.user)
You can also pass several parameters at once:
serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
serializer.save(user=request.user, date=timezone.now(), status='sent')
In this example I created an app named core.
models.py
from django.contrib.auth.models import User
from django.db import models
class Invoice(models.Model):
SENT = 1
PAID = 2
VOID = 3
STATUS_CHOICES = (
(SENT, 'sent'),
(PAID, 'paid'),
(VOID, 'void'),
)
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='invoices')
number = models.CharField(max_length=30)
date = models.DateTimeField(auto_now_add=True)
status = models.PositiveSmallIntegerField(choices=STATUS_CHOICES)
amount = models.DecimalField(max_digits=10, decimal_places=2)
serializers.py
from rest_framework import serializers
from core.models import Invoice
class InvoiceSerializer(serializers.ModelSerializer):
class Meta:
model = Invoice
fields = ('number', 'amount')
views.py
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from core.models import Invoice
from core.serializers import InvoiceSerializer
class InvoiceAPIView(APIView):
def post(self, request):
serializer = InvoiceSerializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(user=request.user, status=Invoice.SENT)
return Response(status=status.HTTP_201_CREATED)
Very similar example, using the same models.py and serializers.py as in the previous example.
views.py
from rest_framework.viewsets import ModelViewSet
from core.models import Invoice
from core.serializers import InvoiceSerializer
class InvoiceViewSet(ModelViewSet):
queryset = Invoice.objects.all()
serializer_class = InvoiceSerializer
def perform_create(self, serializer):
serializer.save(user=self.request.user, status=Invoice.SENT)
How to Use Date Picker with Django [Simple is Better Than Complex]
In this tutorial we are going to explore three date/datetime pickers options that you can easily use in a Django project. We are going to explore how to do it manually first, then how to set up a custom widget and finally how to use a third-party Django app with support to datetime pickers.
The implementation of a date picker is mostly done on the front-end.
The key part of the implementation is to ensure Django will receive the date input value in the correct format, and also that Django will be able to reproduce the format when rendering a form with initial data.
We can also use custom widgets to provide a deeper integration between the front-end and back-end and also to promote better reuse throughout a project.
In the next sections we are going to explore following date pickers:
Tempus Dominus Bootstrap 4 Docs Source
XDSoft DateTimePicker Docs Source
Fengyuan Chen’s Datepicker Docs Source
This is a great JavaScript library and it integrates well with Bootstrap 4. The downside is that it requires moment.js and more or less requires Font Awesome for the icons. It only makes sense to use this library if you are already using Bootstrap 4 + jQuery; otherwise the list of CSS and JS dependencies may look a little overwhelming.
To install it you can use their CDN or download the latest release from their GitHub Releases page.
If you downloaded the code from the releases page, grab the processed code from the build/ folder.
Below, a static HTML example of the datepicker:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Static Example</title>
<!-- Bootstrap 4 -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>
<!-- Font Awesome -->
<link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">
<!-- Moment.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.23.0/moment.min.js" integrity="sha256-VBLiveTKyUZMEzJd6z2mhfxIqz3ZATCuVMawPZGzIfA=" crossorigin="anonymous"></script>
<!-- Tempus Dominus Bootstrap 4 -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/css/tempusdominus-bootstrap-4.min.css" integrity="sha256-XPTBwC3SBoWHSmKasAk01c08M6sIA5gF5+sRxqak2Qs=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/js/tempusdominus-bootstrap-4.min.js" integrity="sha256-z0oKYg6xiLq3yJGsp/LsY9XykbweQlHl42jHv2XTBz4=" crossorigin="anonymous"></script>
</head>
<body>
<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
<input type="text" class="form-control datetimepicker-input" data-target="#datetimepicker1"/>
<div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
<div class="input-group-text"><i class="fa fa-calendar"></i></div>
</div>
</div>
<script>
$(function () {
$("#datetimepicker1").datetimepicker();
});
</script>
</body>
</html>
The challenge now is to have this input snippet integrated with a Django form.
forms.py
from django import forms
class DateForm(forms.Form):
date = forms.DateTimeField(
input_formats=['%d/%m/%Y %H:%M'],
widget=forms.DateTimeInput(attrs={
'class': 'form-control datetimepicker-input',
'data-target': '#datetimepicker1'
})
)
template
<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
{{ form.date }}
<div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
<div class="input-group-text"><i class="fa fa-calendar"></i></div>
</div>
</div>
<script>
$(function () {
$("#datetimepicker1").datetimepicker({
format: 'DD/MM/YYYY HH:mm',
});
});
</script>
The script tag can be placed anywhere, because the snippet $(function () { ... }); will run the datetimepicker initialization when the page is ready. The only requirement is that this script tag is placed after the jQuery script tag.
You can create the widget in any app you want, here I’m going to consider we have a Django app named core.
core/widgets.py
from django.forms import DateTimeInput
class BootstrapDateTimePickerInput(DateTimeInput):
template_name = 'widgets/bootstrap_datetimepicker.html'
def get_context(self, name, value, attrs):
datetimepicker_id = 'datetimepicker_{name}'.format(name=name)
if attrs is None:
attrs = dict()
attrs['data-target'] = '#{id}'.format(id=datetimepicker_id)
attrs['class'] = 'form-control datetimepicker-input'
context = super().get_context(name, value, attrs)
context['widget']['datetimepicker_id'] = datetimepicker_id
return context
In the implementation above we generate a unique ID, datetimepicker_id, and include it in the widget context. The front-end implementation is then done inside the widget HTML snippet.
widgets/bootstrap_datetimepicker.html
<div class="input-group date" id="{{ widget.datetimepicker_id }}" data-target-input="nearest">
{% include "django/forms/widgets/input.html" %}
<div class="input-group-append" data-target="#{{ widget.datetimepicker_id }}" data-toggle="datetimepicker">
<div class="input-group-text"><i class="fa fa-calendar"></i></div>
</div>
</div>
<script>
$(function () {
$("#{{ widget.datetimepicker_id }}").datetimepicker({
format: 'DD/MM/YYYY HH:mm',
});
});
</script>
Note how we make use of the built-in django/forms/widgets/input.html template.
Now the usage:
core/forms.py
from django import forms
from .widgets import BootstrapDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=BootstrapDateTimePickerInput()
    )
Now simply render the field:
template
{{ form.date }}
The good thing about having the widget is that your form can have several date fields using it, and you can still simply render the whole form like this:
<form method="post">
{% csrf_token %}
{{ form.as_p }}
<input type="submit" value="Submit">
</form>
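For instance, a hypothetical multi-field sketch (EventForm and the field names are made up for illustration; the widget is the one defined above):
from django import forms
from .widgets import BootstrapDateTimePickerInput

class EventForm(forms.Form):
    # Each field gets its own unique picker ID from the widget's get_context()
    starts_at = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=BootstrapDateTimePickerInput()
    )
    ends_at = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=BootstrapDateTimePickerInput()
    )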
The XDSoft DateTimePicker is a very versatile date picker and doesn't rely on moment.js or Bootstrap, although it looks good on a Bootstrap website. It is easy to use and very straightforward. You can download the source from the GitHub releases page.
Below, a static example so you can see the minimum requirements and how all the pieces come together:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Static Example</title>
<!-- jQuery -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<!-- XDSoft DateTimePicker -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.min.css" integrity="sha256-DOS9W6NR+NFe1fUhEE0PGKY/fubbUCnOfTje2JMDw3Y=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.full.min.js" integrity="sha256-FEqEelWI3WouFOo2VWP/uJfs1y8KJ++FLh2Lbqc8SJk=" crossorigin="anonymous"></script>
</head>
<body>
<input id="datetimepicker" type="text">
<script>
$(function () {
$("#datetimepicker").datetimepicker();
});
</script>
</body>
</html>
A basic integration with Django would look like this:
forms.py
from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(input_formats=['%d/%m/%Y %H:%M'])
Simple form, default widget, nothing special.
Now using it on the template:
template
{{ form.date }}
<script>
$(function () {
$("#id_date").datetimepicker({
format: 'd/m/Y H:i',
});
});
</script>
The id_date is the default ID Django generates for the form fields (id_ + the field name).
core/widgets.py
from django.forms import DateTimeInput

class XDSoftDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/xdsoft_datetimepicker.html'
widgets/xdsoft_datetimepicker.html
{% include "django/forms/widgets/input.html" %}
<script>
$(function () {
$("input[name='{{ widget.name }}']").datetimepicker({
format: 'd/m/Y H:i',
});
});
</script>
To make the implementation more generic, this time we select the field to initialize by its name instead of its id, in case the user changes the id prefix.
Now the usage:
core/forms.py
from django import forms
from .widgets import XDSoftDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=XDSoftDateTimePickerInput()
    )
template
{{ form.date }}
This is a very beautiful and minimalist date picker. Unfortunately there is no time support, but if you only need dates it is a great choice.
To install this date picker you can either use the CDN or download the sources from the GitHub releases page. Note that the project does not provide compiled/processed JavaScript files, but you can save those to your local machine using the CDN.
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Static Example</title>
<style>body {font-family: Arial, sans-serif;}</style>
<!-- jQuery -->
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<!-- Fengyuan Chen's Datepicker -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.css" integrity="sha256-b88RdwbRJEzRx95nCuuva+hO5ExvXXnpX+78h8DjyOE=" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.js" integrity="sha256-/7FLTdzP6CfC1VBAj/rsp3Rinuuu9leMRGd354hvk0k=" crossorigin="anonymous"></script>
</head>
<body>
<input id="datepicker">
<script>
$(function () {
$("#datepicker").datepicker();
});
</script>
</body>
</html>
A basic integration with Django (note that we are now using a DateField instead of a DateTimeField):
forms.py
from django import forms

class DateForm(forms.Form):
    date = forms.DateField(input_formats=['%d/%m/%Y'])
template
{{ form.date }}
<script>
$(function () {
$("#id_date").datepicker({
format:'dd/mm/yyyy',
});
});
</script>
core/widgets.py
from django.forms import DateInput

class FengyuanChenDatePickerInput(DateInput):
    template_name = 'widgets/fengyuanchen_datepicker.html'
widgets/fengyuanchen_datepicker.html
{% include "django/forms/widgets/input.html" %}
<script>
$(function () {
$("input[name='{{ widget.name }}']").datepicker({
format:'dd/mm/yyyy',
});
});
</script>
Usage:
core/forms.py
from django import forms
from .widgets import FengyuanChenDatePickerInput

class DateForm(forms.Form):
    date = forms.DateField(
        input_formats=['%d/%m/%Y'],
        widget=FengyuanChenDatePickerInput()
    )
template
{{ form.date }}
The implementation is very similar no matter which date/datetime picker you are using. Hopefully this tutorial provided some insight into how to integrate this kind of frontend library with a Django project.
As always, the best source of information about each of these libraries is their official documentation.
I also created an example project to show the usage and implementation of the widgets for each of the libraries presented in this tutorial. Grab the source code at github.com/sibtc/django-datetimepicker-example.
How to Implement Grouped Model Choice Field [Simple is Better Than Complex]
The Django forms API has two field types for working with multiple options: ChoiceField and ModelChoiceField. Both use a select input as the default widget and they work in a similar way, except that ModelChoiceField is designed to handle QuerySets and work with foreign key relationships.
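As a quick sketch of the latter (assuming the Category model that is defined later in this post):
from django import forms
from .models import Category  # defined later in this post

class ExpenseForm(forms.Form):
    # Renders a select input with one option per Category instance
    category = forms.ModelChoiceField(queryset=Category.objects.all())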
A basic implementation using a ChoiceField would be:
class ExpenseForm(forms.Form):
    CHOICES = (
        (11, 'Credit Card'),
        (12, 'Student Loans'),
        (13, 'Taxes'),
        (21, 'Books'),
        (22, 'Games'),
        (31, 'Groceries'),
        (32, 'Restaurants'),
    )

    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)
You can also organize the choices in groups to generate <optgroup> tags, like this:
class ExpenseForm(forms.Form):
    CHOICES = (
        ('Debt', (
            (11, 'Credit Card'),
            (12, 'Student Loans'),
            (13, 'Taxes'),
        )),
        ('Entertainment', (
            (21, 'Books'),
            (22, 'Games'),
        )),
        ('Everyday', (
            (31, 'Groceries'),
            (32, 'Restaurants'),
        )),
    )

    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)
When you are using a ModelChoiceField, unfortunately there is no built-in solution.
Recently I found a nice solution on Django's ticket tracker, where someone proposed adding an opt_group argument to ModelChoiceField.
While the discussion is still ongoing, Simon Charette has proposed a really good solution. Let's see how we can integrate it in our project.
First consider the following models:
models.py
from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=30)
    parent = models.ForeignKey('Category', on_delete=models.CASCADE, null=True)

    def __str__(self):
        return self.name

class Expense(models.Model):
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    date = models.DateField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

    def __str__(self):
        # __str__ must return a string, so convert the Decimal amount
        return str(self.amount)
So now our category, instead of being a regular choices field, is a model, and the Expense model has a relationship with it using a foreign key. If we create a ModelForm using this model, the result will be very similar to our first example.
To get grouped categories you will need the code below. First create a new module named fields.py:
fields.py
from functools import partial
from itertools import groupby
from operator import attrgetter

from django.forms.models import ModelChoiceIterator, ModelChoiceField

class GroupedModelChoiceIterator(ModelChoiceIterator):
    def __init__(self, field, groupby):
        self.groupby = groupby
        super().__init__(field)

    def __iter__(self):
        if self.field.empty_label is not None:
            yield ("", self.field.empty_label)
        queryset = self.queryset
        # Can't use iterator() when queryset uses prefetch_related()
        if not queryset._prefetch_related_lookups:
            queryset = queryset.iterator()
        for group, objs in groupby(queryset, self.groupby):
            yield (group, [self.choice(obj) for obj in objs])

class GroupedModelChoiceField(ModelChoiceField):
    def __init__(self, *args, choices_groupby, **kwargs):
        if isinstance(choices_groupby, str):
            choices_groupby = attrgetter(choices_groupby)
        elif not callable(choices_groupby):
            raise TypeError('choices_groupby must either be a str or a callable accepting a single argument')
        self.iterator = partial(GroupedModelChoiceIterator, groupby=choices_groupby)
        super().__init__(*args, **kwargs)
And here is how you use it in your forms:
forms.py
from django import forms
from .fields import GroupedModelChoiceField
from .models import Category, Expense

class ExpenseForm(forms.ModelForm):
    category = GroupedModelChoiceField(
        queryset=Category.objects.exclude(parent=None),
        choices_groupby='parent'
    )

    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')
Because in the example above I used a self-referencing relationship, I had to add exclude(parent=None) to keep the “group categories” from showing up in the select input as valid options.
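One caveat worth spelling out: itertools.groupby only groups consecutive rows, so the queryset should be ordered by the grouping key, otherwise the same group can show up more than once. A minimal adjustment (the order_by call is my addition, not part of the original snippet):
category = GroupedModelChoiceField(
    # ordering by the grouping key keeps each <optgroup> together
    queryset=Category.objects.exclude(parent=None).order_by('parent__name'),
    choices_groupby='parent'
)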
You can download the code used in this tutorial from GitHub: github.com/sibtc/django-grouped-choice-field-example
Credits for the solution go to Simon Charette on the Django ticket tracker.
How to Use JWT Authentication with Django REST Framework [Simple is Better Than Complex]
JWT stands for JSON Web Token and it is an authentication strategy for client/server applications where the client is typically a web application using JavaScript and some frontend framework like Angular, React, or Vue.js.
In this tutorial we are going to explore the specifics of JWT authentication. If you want to learn more about Token-based authentication using Django REST Framework (DRF), or if you want to know how to start a new DRF project you can read this tutorial: How to Implement Token Authentication using Django REST Framework. The concepts are the same, we are just going to switch the authentication backend.
The JWT is just an authorization token that should be included in all requests:
curl http://127.0.0.1:8000/hello/ -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY'
The JWT is acquired by exchanging a username + password for an access token and a refresh token.
The access token is usually short-lived (it expires in 5 minutes or so, but this can be customized).
The refresh token lives a little bit longer (it expires in 24 hours, also customizable). It is comparable to an authentication session; after it expires, you need a full login with username + password again.
Why is that?
It's a security feature, and it's also because the JWT holds a little bit more information. If you look closely at the example I gave above, you will see the token is composed of three parts:
xxxxx.yyyyy.zzzzz
Those are three distinctive parts that compose a JWT:
header.payload.signature
So we have here:
header = eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
payload = eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9
signature = Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY
This information is encoded using Base64 (encoded, not encrypted). If we decode it, we will see something like this:
header
{
    "typ": "JWT",
    "alg": "HS256"
}
payload
{
    "token_type": "access",
    "exp": 1543828431,
    "jti": "7f5997b7150d46579dc2b49167097e7b",
    "user_id": 1
}
signature
The signature is issued by the JWT backend using the base64-encoded header + the base64-encoded payload + the SECRET_KEY. Upon each request this signature is verified. If any information in the header or in the payload is changed by the client, it will invalidate the signature. The only way of checking and validating the signature is by using your application's SECRET_KEY. Among other things, that's why you should always keep your SECRET_KEY secret!
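To see for yourself that the header and payload are plain Base64, here is a minimal Python sketch that decodes the payload of the example token from above:
import base64
import json

token = ('eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9'
         '.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9'
         '.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY')

header_b64, payload_b64, signature = token.split('.')

# JWTs strip the Base64 '=' padding, so restore it before decoding
payload_b64 += '=' * (-len(payload_b64) % 4)
print(json.loads(base64.urlsafe_b64decode(payload_b64)))
# {'token_type': 'access', 'exp': 1543828431, 'jti': '7f5997b7...', 'user_id': 1}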
For this tutorial we are going to use the djangorestframework_simplejwt library, recommended by the DRF developers.
pip install djangorestframework_simplejwt
settings.py
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
}
urls.py
from django.urls import path
from rest_framework_simplejwt import views as jwt_views

urlpatterns = [
    # Your URLs...
    path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
]
For this tutorial I will use the following route and API view:
views.py
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated

class HelloView(APIView):
    permission_classes = (IsAuthenticated,)

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)
urls.py
from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]
I will be using HTTPie to consume the API endpoints via the terminal, but you can also use cURL (readily available on most operating systems) to try things out locally. Alternatively, use the DRF web interface by accessing the endpoint URLs in the browser.
The first step is to authenticate and obtain the token. The endpoint is /api/token/ and it only accepts POST requests.
http post http://127.0.0.1:8000/api/token/ username=vitor password=123
The response body is basically the two tokens:
{
    "access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjU5LCJqdGkiOiIyYmQ1NjI3MmIzYjI0YjNmOGI1MjJlNThjMzdjMTdlMSIsInVzZXJfaWQiOjF9.D92tTuVi_YcNkJtiLGHtcn6tBcxLCBxz9FKD3qzhUg8",
    "refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMxMDM1OSwianRpIjoiMjk2ZDc1ZDA3Nzc2NDE0ZjkxYjhiOTY4MzI4NGRmOTUiLCJ1c2VyX2lkIjoxfQ.rA-mnGRg71NEW_ga0sJoaMODS5ABjE5HnxJDb0F8xAo"
}
After that you are going to store both the access token and the refresh token on the client side, usually in localStorage.
In order to access the protected views on the backend (i.e., the API endpoints that require authentication), you should include the access token in the header of all requests, like this:
http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"
You can use this access token for the next five minutes. After that it will expire, and if you try to access the view again you are going to get the following error:
http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"
To get a new access token, you should use the refresh token endpoint /api/token/refresh/, posting the refresh token:
http post http://127.0.0.1:8000/api/token/refresh/ refresh=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMwODIyMiwianRpIjoiNzAyOGFlNjc0ZTdjNDZlMDlmMzUwYjg3MjU1NGUxODQiLCJ1c2VyX2lkIjoxfQ.Md8AO3dDrQBvWYWeZsd_A1J39z6b6HEwWIUZ7ilOiPE
The return is a new access token that you should use in the subsequent requests.
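In a Python client, this refresh flow could look roughly like the sketch below (using the requests library; the refresh_token variable is assumed to hold the stored refresh token):
import requests

# exchange the stored refresh token for a fresh access token
response = requests.post(
    'http://127.0.0.1:8000/api/token/refresh/',
    data={'refresh': refresh_token},
)
access_token = response.json()['access']

# use the fresh access token on subsequent requests
r = requests.get(
    'http://127.0.0.1:8000/hello/',
    headers={'Authorization': 'Bearer {}'.format(access_token)},
)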
The refresh token is valid for the next 24 hours. When it finally expires too, the user will need to perform a full authentication again with their username and password to get a new pair of access and refresh tokens.
At first glance the refresh token may look pointless, but in fact it is necessary to make sure the user still has the correct permissions. If your access token had a long lifetime, it would take longer for the information associated with the token to be updated. That's because the authentication check is done by cryptographic means instead of querying the database and verifying the data, so some information is effectively cached.
There is also a security aspect: the refresh token only travels in the POST data, while the access token is sent via an HTTP header, which may be logged along the way. So this also gives only a short window should your access token be compromised.
This should cover the basics on the backend implementation. It’s worth checking the djangorestframework_simplejwt settings for further customization and to get a better idea of what the library offers.
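For instance, the five-minute and 24-hour lifetimes mentioned above come from the library's defaults and can be changed via the SIMPLE_JWT setting; a sketch showing the default values:
settings.py
from datetime import timedelta

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(minutes=5),
    'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
}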
The implementation on the frontend depends on what framework/library you are using. Some libraries and articles covering popular frontend frameworks like angular/react/vue.js:
The code used in this tutorial is available at github.com/sibtc/drf-jwt-example.
Advanced Form Rendering with Django Crispy Forms [Simple is Better Than Complex]
[Django 2.1.3 / Python 3.6.5 / Bootstrap 4.1.3]
In this tutorial we are going to explore some of the Django Crispy Forms features to handle advanced/custom forms rendering. This blog post started as a discussion in our community forum, so I decided to compile the insights and solutions in a blog post to benefit a wider audience.
Throughout this tutorial we are going to implement the following Bootstrap 4 form using Django APIs:
This was taken from the official Bootstrap 4 documentation as an example of how to use form rows.
NOTE!
The examples below refer to a base.html
template. Consider the code below:
base.html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
</head>
<body>
<div class="container">
{% block content %}
{% endblock %}
</div>
</body>
</html>
Install django-crispy-forms using pip:
pip install django-crispy-forms
Add it to your INSTALLED_APPS
and select which styles to use:
settings.py
INSTALLED_APPS = [
    ...
    'crispy_forms',
]

CRISPY_TEMPLATE_PACK = 'bootstrap4'
For detailed instructions about how to install django-crispy-forms, please refer to this tutorial: How to Use Bootstrap 4 Forms With Django.
The Python code required to represent the form above is the following:
from django import forms

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)
In this case I'm using a regular Form, but it could also be a ModelForm based on a Django model with similar fields, as in the sketch below. The state field and the STATES choices could come from a foreign key or anything else; here I'm just using a simple static example with three Brazilian states.
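A hypothetical ModelForm version, for comparison (the Address model is made up for illustration and is not part of this tutorial):
from django import forms
from .models import Address  # hypothetical model with fields matching the form above

class AddressModelForm(forms.ModelForm):
    class Meta:
        model = Address
        fields = ['email', 'address_1', 'address_2', 'city', 'state', 'zip_code']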
Template:
{% extends 'base.html' %}
{% block content %}
<form method="post">
{% csrf_token %}
<table>{{ form.as_table }}</table>
<button type="submit">Sign in</button>
</form>
{% endblock %}
Rendered HTML:
Rendered HTML with validation state:
Same form code as in the example before.
Template:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block content %}
<form method="post">
{% csrf_token %}
{{ form|crispy }}
<button type="submit" class="btn btn-primary">Sign in</button>
</form>
{% endblock %}
Rendered HTML:
Rendered HTML with validation state:
Same form code as in the first example.
Template:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block content %}
<form method="post">
{% csrf_token %}
<div class="form-row">
<div class="form-group col-md-6 mb-0">
{{ form.email|as_crispy_field }}
</div>
<div class="form-group col-md-6 mb-0">
{{ form.password|as_crispy_field }}
</div>
</div>
{{ form.address_1|as_crispy_field }}
{{ form.address_2|as_crispy_field }}
<div class="form-row">
<div class="form-group col-md-6 mb-0">
{{ form.city|as_crispy_field }}
</div>
<div class="form-group col-md-4 mb-0">
{{ form.state|as_crispy_field }}
</div>
<div class="form-group col-md-2 mb-0">
{{ form.zip_code|as_crispy_field }}
</div>
</div>
{{ form.check_me_out|as_crispy_field }}
<button type="submit" class="btn btn-primary">Sign in</button>
</form>
{% endblock %}
Rendered HTML:
Rendered HTML with validation state:
We could use the crispy forms layout helpers to achieve the same result as above. The implementation is done inside the form's __init__ method:
forms.py
from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit, Row, Column

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            'check_me_out',
            Submit('submit', 'Sign in')
        )
The template implementation is very minimal:
{% extends 'base.html' %}
{% load crispy_forms_tags %}
{% block content %}
{% crispy form %}
{% endblock %}
The end result is the same.
Rendered HTML:
Rendered HTML with validation state:
You may also customize the field template and easily reuse it throughout your application. Let's say we want to use the custom Bootstrap 4 checkbox:
From the official documentation, the necessary HTML to output the input above:
<div class="custom-control custom-checkbox">
<input type="checkbox" class="custom-control-input" id="customCheck1">
<label class="custom-control-label" for="customCheck1">Check this custom checkbox</label>
</div>
Using the crispy forms API, we can create a new template for this custom field in our “templates” folder:
custom_checkbox.html
{% load crispy_forms_field %}
<div class="form-group">
<div class="custom-control custom-checkbox">
{% crispy_field field 'class' 'custom-control-input' %}
<label class="custom-control-label" for="{{ field.id_for_label }}">{{ field.label }}</label>
</div>
</div>
Now we can create a new crispy field, either in our forms.py module or in a new Python module named fields.py or something.
forms.py
from crispy_forms.layout import Field

class CustomCheckbox(Field):
    template = 'custom_checkbox.html'
We can use it now in our form definition:
forms.py
class CustomFieldForm(AddressForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            CustomCheckbox('check_me_out'),  # <-- Here
            Submit('submit', 'Sign in')
        )
(PS: the AddressForm was defined earlier in this post and is the same as in the previous example.)
The end result:
There is much more Django Crispy Forms can do. Hopefully this tutorial gave you some extra insights on how to use the form helpers and layout classes. As always, the official documentation is the best source of information:
Django Crispy Forms layouts docs
Also, the code used in this tutorial is available on GitHub at github.com/sibtc/advanced-crispy-forms-examples.
How to Implement Token Authentication using Django REST Framework [Simple is Better Than Complex]
In this tutorial you are going to learn how to implement Token-based authentication using Django REST Framework (DRF). Token authentication works by exchanging a username and password for a token that will be used in all subsequent requests to identify the user on the server side.
The specifics of how the authentication is handled on the client side vary a lot depending on the technology/language/framework you are working with. The client could be a mobile application using iOS or Android. It could be a desktop application using Python or C++. It could be a Web application using PHP or Ruby.
But once you understand the overall process, it’s easier to find the necessary resources and documentation for your specific use case.
Token authentication is suitable for client-server applications where the token is safely stored. You should never expose your token, as it would be (sort of) equivalent to handing out your username and password.
So let’s start from the very beginning. Install Django and DRF:
pip install django
pip install djangorestframework
Create a new Django project:
django-admin.py startproject myapi .
Navigate to the myapi folder:
cd myapi
Start a new app. I will call my app core:
django-admin.py startapp core
Here is what your project structure should look like:
myapi/
|-- core/
| |-- migrations/
| |-- __init__.py
| |-- admin.py
| |-- apps.py
| |-- models.py
| |-- tests.py
| +-- views.py
|-- __init__.py
|-- settings.py
|-- urls.py
+-- wsgi.py
manage.py
Add the core app (you created) and the rest_framework app (you installed) to INSTALLED_APPS, inside the settings.py module:
myapi/settings.py
INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',

    # Local Apps (Your project's apps)
    'myapi.core',
]
Return to the project root (the folder where the manage.py script is), and migrate the database:
python manage.py migrate
Let’s create our first API view just to test things out:
myapi/core/views.py
from rest_framework.views import APIView
from rest_framework.response import Response

class HelloView(APIView):
    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)
Now register a path in the urls.py module:
myapi/urls.py
from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]
So now we have an API with just one endpoint, /hello/, on which we can perform GET requests. We can use the browser to consume this endpoint, just by accessing the URL http://127.0.0.1:8000/hello/:
We can also ask to receive the response as plain JSON data by passing the format parameter in the querystring, like http://127.0.0.1:8000/hello/?format=json:
Both methods are fine for trying out a DRF API, but sometimes a command line tool is more handy, as we can play more easily with the request headers. You can use cURL, which is widely available on all major Linux/macOS distributions:
curl http://127.0.0.1:8000/hello/
But usually I prefer to use HTTPie, which is a pretty awesome Python command line tool:
http http://127.0.0.1:8000/hello/
Now let’s protect this API endpoint so we can implement the token authentication:
myapi/core/views.py
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated  # <-- Here

class HelloView(APIView):
    permission_classes = (IsAuthenticated,)  # <-- And here

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)
Try again to access the API endpoint:
http http://127.0.0.1:8000/hello/
And now we get an HTTP 403 Forbidden error. Let's implement the token authentication so we can access this endpoint.
We need to add two pieces of information to our settings.py module. First include rest_framework.authtoken in your INSTALLED_APPS, then include TokenAuthentication in REST_FRAMEWORK:
myapi/settings.py
INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',
    'rest_framework.authtoken',  # <-- Here

    # Local Apps (Your project's apps)
    'myapi.core',
]

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.TokenAuthentication',  # <-- And here
    ],
}
Migrate the database to create the table that will store the authentication tokens:
python manage.py migrate
Now we need a user account. Let's just create one using the manage.py command line utility:
python manage.py createsuperuser --username vitor --email vitor@example.com
The easiest way to generate a token, just for testing purposes, is using the command line utility again:
python manage.py drf_create_token vitor
This piece of information, the random string 9054f7aa9305e012b3c2300408c3dfdf390fcddf, is what we are going to use next to authenticate.
But now that we have TokenAuthentication in place, let's try to make another request to our /hello/ endpoint:
http http://127.0.0.1:8000/hello/
Notice how our API is now providing some extra information to the client on the required authentication method.
So finally, let’s use our token!
http http://127.0.0.1:8000/hello/ 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'
And that's pretty much it. From now on, all subsequent requests should include the header Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf.
The formatting looks weird and is usually a point of confusion. How exactly you set this header depends on the client library and how it sets HTTP request headers.
For example, if we were using cURL, the command would be something like this:
curl http://127.0.0.1:8000/hello/ -H 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'
Or if it was a Python requests call:
import requests
url = 'http://127.0.0.1:8000/hello/'
headers = {'Authorization': 'Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'}
r = requests.get(url, headers=headers)
Or if we were using Angular, you could implement an HttpInterceptor and set the header:
import { Injectable } from '@angular/core';
import { HttpRequest, HttpHandler, HttpEvent, HttpInterceptor } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    const user = JSON.parse(localStorage.getItem('user'));
    if (user && user.token) {
      request = request.clone({
        setHeaders: {
          // use the same property that was checked above
          Authorization: `Token ${user.token}`
        }
      });
    }
    return next.handle(request);
  }
}
DRF provides an endpoint for users to request an authentication token using their username and password. Include the following route in the urls.py module:
myapi/urls.py
from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token  # <-- Here
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
    path('api-token-auth/', obtain_auth_token, name='api_token_auth'),  # <-- And here
]
So now we have a brand new API endpoint, /api-token-auth/. Let's first inspect it:
http http://127.0.0.1:8000/api-token-auth/
It doesn't handle GET requests; basically it's just a view that receives a POST request with a username and password.
Let’s try again:
http post http://127.0.0.1:8000/api-token-auth/ username=vitor password=123
The response body is the token associated with this particular user. From this point on you store this token and apply it to future requests.
Then, again, the way you are going to make the POST request to the API depends on the language/framework you are using.
If this was an Angular client, you could store the token in localStorage; if it was a desktop CLI application, you could store it in a dot file in the user's home directory.
Hopefully this tutorial provided some insight into how token authentication works. I will try to follow up on this tutorial with some concrete examples of Angular applications, command line applications, and web clients as well.
It is important to note that the default Token implementation has some limitations, such as only one token per user and no built-in way to set an expiry date on the token.
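For completeness: tokens can also be created programmatically rather than with the drf_create_token management command; a minimal sketch:
from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token

user = User.objects.get(username='vitor')
# get_or_create reflects the one-token-per-user limitation mentioned above
token, created = Token.objects.get_or_create(user=user)
print(token.key)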
You can grab the code used in this tutorial at github.com/sibtc/drf-token-auth-example.
On 1 May 2019 my blog will be 10 years old, and then I will stop (for the time being). It is also time to bring this blog up to date and to occupy myself with ...
Python GUI application: consistent backups with fsarchiver [linux blogs franz ulenaers]
A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a certain point in time: this makes it possible to take consistent backups of logical volumes while they are in use!
My physical and logical volumes were created as follows:
physical volume
pvcreate /dev/sda1
physical volume group
vgcreate mydell /dev/sda1
logical volumes
lvcreate -L 1G -n boot mydell
lvcreate -L 100G -n data mydell
lvcreate -L 50G -n home mydell
lvcreate -L 50G -n root mydell
lvcreate -L 1G -n swap mydell
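The snapshot step itself is not shown in the post; a hedged sketch of how a consistent backup of the home volume could work (the snapshot size, snapshot name and archive path are made up):
lvcreate -s -L 5G -n home_snap /dev/mydell/home
fsarchiver savefs /backup/home.fsa /dev/mydell/home_snap
lvremove -f /dev/mydell/home_snap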
LVM logical volumes [linux blogs franz ulenaers]
A partition of type "Linux LVM" can be used for logical volumes, but also as a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a certain point in time: this makes it possible to take consistent backups of logical volumes while they are in use!
How to install?
sudo apt-get install lvm2
Create a physical volume for a partition
command = 'pvcreate' partition
example (the partition must be of type "Linux LVM"!):
pvcreate /dev/sda5
Create a physical volume group
vgcreate vg_storage partition
example:
vgcreate mijnvg /dev/sda5
Add a logical volume to a volume group
lvcreate -L size_in_M/G -n logical_volume_name volume_group
example:
lvcreate -L 30G -n mijnhome mijnvg
Activate a volume group
vgchange -a y volume_group_name
example:
vgchange -a y mijnvg
My physical and logical volumes
physical volume
pvcreate /dev/sda1
physical volume group
vgcreate mydell /dev/sda1
logical volumes
lvcreate -L 1G -n boot mydell
lvcreate -L 100G -n data mydell
lvcreate -L 50G -n home mydell
lvcreate -L 50G -n root mydell
lvcreate -L 1G -n swap mydell
Growing/shrinking logical volumes
grow my home logical volume by 1 G:
lvextend -L +1G /dev/mapper/mydell-home
beware: shrinking a logical volume can lead to data loss if there is not enough free space... !
lvreduce -L -1G /dev/mapper/mydell-home
Show physical volumes
sudo pvs
shown are: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space
PV VG Fmt Attr PSize PFree
/dev/sda6 mydell lvm2 a-- 920,68g 500,63g
sudo pvs -a
sudo pvs /dev/sda6
Backing up the logical volume settings
see the attached script LVM_bkup
Show volume groups
sudo vgs
VG #PV #LV #SN Attr VSize VFree
mydell 1 6 0 wz--n- 920,68g 500,63g
Show logical volume(s)
sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
boot mydell -wi-ao---- 952,00m
data mydell -wi-ao---- 100,00g
home mydell -wi-ao---- 93,13g
mintroot mydell -wi-a----- 101,00g
root mydell -wi-ao---- 94,06g
swap mydell -wi-ao---- 30,93g
How to remove a logical volume?
a logical volume can only be removed when it is no longer active
this can be done with the vgchange command:
vgchange -a n mydell
lvremove /dev/volume_group/logical_volume_name
example:
lvremove /dev/mydell/data
How to remove a physical volume?
vgreduce mydell /dev/sda1
Attachments: LVM_bkup (0.8 kB)
how to mount and umount a stick without being root and with your own rwx rights! [linux blogs franz ulenaers]
How do you mount and umount a USB stick without being root and with rwx rights?
---------------------------------------------------------------------------------------------------------
(rename every ulefr01 to your own username!)
Use the 'fatlabel' command to assign a volume name or label if you use a vfat filesystem on your USB stick; use the 'tune2fs' command for ext2,3,4.
To create the volume name stick32GB on your USB stick, use the command:
sudo tune2fs -L stick32GB /dev/sdc1
note: substitute the correct device for /dev/sdc1!
Possibly, after mounting, dmesg shows messages such as: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
Use the file system consistency check command fsck to fix this.
Do a umount before you run the fsck command! (use the correct device!)
fsck /dev/sdc1
note: substitute your own device for /dev/sdc1!
Insert your stick into a USB port and umount the stick:
sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
Set an acl on your ext2,3,4 stick (this does not work on vfat!):
setfacl -m u:ulefr01:rwx /media/ulefr01
With getfacl you can see the acl:
getfacl /media/ulefr01
With the ls command you can see the result:
ls /media/ulefr01 -dla
drwxrwx--- 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01
note: if a '+' is present then an acl is already in place, as on the following line:
drwxrwx---+ 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01
Insert your stick into a USB port and check whether it is mounted automatically.
Check the rights of existing files and folders on your stick:
ls * -la
If root or other rights are already present, reset them with the following command:
sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R
cd /media/ulefr01
mkdir mmcblk16G stick32GB stick16gb
Add a line to /etc/fstab for each stick; examples:
LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0
The following should now be possible:
mount and umount without being root (see the sketch at the end of this post)
note: you cannot umount if the mount was done by root! If that is the case, first do the umount as root, then mount as your user; after that you can do the umount yourself.
put a new file on your stick without being root
put a new folder on your stick without being root
check that you can create new files without being root:
touch test
ls test -la
rm test
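A usage sketch for the fstab entries above (the user and noauto options let a regular user mount and umount by mount point):
mount /media/ulefr01/stick32GB
umount /media/ulefr01/stick32GB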
procedures MyCloud [linux blogs franz ulenaers]
The procedure lftpUlefr01Cloudupload is used to upload files and folders to MyCloud.
The procedure lftpUlefr01Cloudmirror is used to fetch changes back.
Both procedures use the program lftp (a "sophisticated file transfer program") and are used to allow synchronisation between laptop and desktop.
The procedures were adjusted so that hidden files and hidden folders are also processed; in addition, for mirror, certain mostly unchanged files and folders were filtered out (--exclude) so that they are not processed again.
They remain on the Cloud as a backup, but not on the various laptops (this was done for the older mails of 2016, months 2016-11 and 2016-12, and for all previous months of 2017, up to and including September!)
see attachments
Set an acl list [linux blogs franz ulenaers]
note: generally possible on Linux filesystems: btrfs, ext2, ext3, ext4 and ReiserFS!
How to set an acl for one user?
setfacl -m u:ulefr01:rwx /home/ulefr01
note: use your own username instead of ulefr01
How to remove an acl?
setfacl -x u:ulefr01 /home/ulefr01
How to set an acl for two or more users?
setfacl -m u:ulefr01:rwx /home/ulefr01
setfacl -m u:myriam:r-x /home/ulefr01
note: use your second username instead of myriam; here myriam has no w (write) access, but does have r (read) and x (exec)!
How to list the acls that have been set?
getfacl /home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x
group::---
mask::rwx
other::---
How to check the result?
getfacl /home/ulefr01
see above
ls /home/ulefr01 -dla
drwxrwx---+ ulefr01 ulefr01 4096 okt 1 18:40 /home/ulefr01
note the + sign!
python GUI application tune2fs [linux blogs franz ulenaers]
Created Wednesday 18 October 2017
written in the Python programming language using GTK+ 3
start it in a terminal with: sudo python mytune2fs.py
or compile the Python source and start the compiled version
Python GUI application myarchive.py [linux blogs franz ulenaers]
Created Friday 13 October 2017
start in terminal mode with:
* sudo python myarchive.py
* sudo python myarchive2.py
or make a compiled version and start the generated objects
python myfsck.py [linux blogs franz ulenaers]
Created Friday 13 October 2017
see the attached file myfsck.py
This application can mount and umount devices, but is mainly meant to run the fsck command.
Root rights are required!
help?
* start in terminal mode
* sudo python myfsck.py
What is the best (most performant) filesystem on a USB stick, and how do you set it up? [linux blogs franz ulenaers]
the best (most performant) filesystem is ext4
how to set it up?
mkfs.ext4 $device
first switch the journal off:
tune2fs -O ^has_journal $device
do journaling only with data_writeback:
tune2fs -o journal_data_writeback $device
do not use reserved space, set it to zero:
tune2fs -m 0 $device
for the three actions above the included bash script can be used:
file USBperf
# USBperfext4
echo 'USBperf'
echo '--------'
echo 'ext4 device ?'
read device
echo "device= $device"
echo 'ok ?'
read ok
if [ "$ok" == "" ] || [ "$ok" == "n" ] || [ "$ok" == "N" ]
then
    echo 'not ok - stopping'
    exit 1
fi
echo "disable journaling: tune2fs -O ^has_journal $device"
tune2fs -O ^has_journal $device
echo "use writeback data mode for the filesystem: tune2fs -o journal_data_writeback $device"
tune2fs -o journal_data_writeback $device
echo "disable reserved space"
tune2fs -m 0 $device
echo 'done!'
read ok
echo "device= $device"
exit 0
adjust the /etc/fstab entry for your USB stick: use the 'noatime' option
How to make a file impossible to modify, rename or delete in Linux! [linux blogs franz ulenaers]
how: sudo chattr +i /data/Encrypt/.encfs6.xml
the file can then not be modified, renamed or deleted, even by root
Backup laptop [linux blogs franz ulenaers]
Encryption [linux blogs franz ulenaers]
With encryption you can secure the data on your computer by making it unreadable to the outside world!
How can you encrypt a filesystem?
install the following open source packages:
loop-aes-utils and cryptsetup
apt-get install loop-aes-utils
apt-get install cryptsetup
How to create a secured filesystem?
You can make your filesystem available automatically with an entry like the following in /etc/fstab:
/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0
....
You can switch your encryption off by means of ...
Links in Linux [linux blogs franz ulenaers]
On Linux you can give files multiple names, so you can store a file in several places in the file tree without using extra space on the hard disk (more or less).
There are two kinds of links:
hard links
symbolic links
A hard link uses the same file number (inode).
A hard link does not work for a directory!
A hard link must be on the same filesystem, and the original file must exist!
With a symbolic link the file gets a new file number; the file that is pointed to does not have to exist.
A symbolic link also works for a directory.
The file linuxcursus is 4.2M in size, inode no. 293800.
Samsung Galaxy Z Flip, S20(+) and S20 Ultra hands-on [Laatste Artikelen - Webwereld]
Samsung invited us to take a close look at its three newest smartphones. We gladly took the opportunity and share our findings with you.
Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]
It is well known by now that your NAS can be used for much more than just storing files, but did you know you can also manage virtual machines with it? We explain how.
What you need to know about FIDO keys [Laatste Artikelen - Webwereld]
Thanks to the FIDO2 standard it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. More organisations are likely to follow this year.
How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]
Nowadays you have to create an account for almost everything you want to do online, even if you don't plan to work online or simply don't feel like sharing your data with the manufacturer. Today we show you how to manage that with your iPhone or iPad.
Big hole in Internet Explorer already being exploited in the wild [Laatste Artikelen - Webwereld]
A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The new hole is already being exploited, and a security update is not yet available.
How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]
The new version of Edge is built with code from the Chromium project, but in the default configuration extensions are installed exclusively via the Microsoft Store. Fortunately, that is fairly easy to change.
Windows 10 upgrade still free [Laatste Artikelen - Webwereld]
A few years ago Microsoft gave users the option to upgrade from Windows 7 to Windows 10 for free. At times this went so far that even users who didn't want an upgrade got one. The offer is long gone, but upgrading for free is still possible, and it is now easier than ever. We tell you how.
Chrome, Edge, Firefox: which browser is the fastest? [Laatste Artikelen - Webwereld]
A lot has changed in the PC browser market. About five years ago there was more competition and more fully independent development; now only two engines are left: the one behind Chrome and the one behind Firefox. With the release of Microsoft's Blink-based Edge this month, we look at benchmarks and real-world tests.
Cooler Master redesigns thermal paste tubes due to drug suspicions [Laatste Artikelen - Webwereld]
Cooler Master has changed the look of its thermal paste syringes because the company says it is tired of having to explain to parents that the contents are not drugs but thermal paste.
mounting a stick without root, setting labels, making a filesystem clean [ulefr01 - blog franz ulenaers]
Embedded Linux Engineer [Job Openings]
You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small-footprint distributions is considered a plus. Region East/West Flanders
We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally the candidate has experience with teaching in Linux, possibly other non-Windows OSes as well.
Kernel Developer [Job Openings]
We're looking for someone with kernel device driver developement experience. Preferably, but not necessary with knowledge of AV or TV devices.
C/C++ Developers [Job Openings]
We're searching for Linux C/C++ developers. Region Leuven.
Feed | RSS | Last fetched | Next fetched after |
---|---|---|---|
Computable | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
GNOMON | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
http://www.h-online.com/news/atom.xml | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
http://www.h-online.com/open/atom.xml | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Job Openings | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Laatste Artikelen - Webwereld | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
linux blogs franz ulenaers | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Linux Journal - The Original Magazine of the Linux Community | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Linux Today | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
OMG! Ubuntu! | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Planet Python | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Press Releases Archives - The Document Foundation Blog | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Simple is Better Than Complex | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Slashdot: Linux | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
Tech Drive-in | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
ulefr01 - blog franz ulenaers | XML | 17-05-2022, 17:27 | 17-05-2022, 20:27 |
If you peruse the archives of language-summit blogs, you'll find that one theme comes up again and again: the dream of Python without the GIL. Continuing this venerable tradition, Sam Gross kicked off the 2022 Language Summit by giving the attendees an update on nogil, a project that took the Python community by storm when it was first announced in October 2021.
The GIL, or “Global Interpreter Lock”, is the key feature of Python that prevents true concurrency between threads. This is another way of saying that it makes it difficult to do multiple tasks simultaneously while only running a single Python process. Previously the main cheerleader for removing the GIL was Larry Hastings, with his famous “Gilectomy” project. The Gilectomy project was ultimately abandoned due to the fact that it made single-threaded Python code significantly slower. But after seeing Gross's proof-of-concept fork in October, Hastings wrote in an email to the python-dev mailing list:
The current status of nogil
Since releasing his proof-of-concept fork in October – based on an alpha version of Python 3.9 – Gross stated that he'd been working to rebase the nogil changes onto 3.9.10. Python 3.9 had been chosen as a target for now, as reaching a level of early adoption was important in order to judge whether the project as a whole would be viable. Early adopters would not be able to use the project effectively if third-party packages didn't work when using nogil. There is still much broader support for Python 3.9 among third-party packages than for Python 3.10, and so Python 3.9 still made more sense as a base branch for now rather than 3.10 or main.
Gross's other update was that he had made a change in his approach with regard to thread safety. In order to make Python work effectively without the GIL, a lot of code needs to have new locks added to it in order to ensure that it is still thread-safe. Adding new locks to existing code, however, can be very difficult, as there is potential for large slowdowns in some areas. Gross's solution had been to invent a new kind of lock, one that is “more Gilly”.
The proposal
Gross came to the Summit with a proposal: to introduce a new compiler flag in Python 3.12 that would disable the GIL.
This is a slight change to Gross’s initial proposal from October, where he brought up the idea of a runtime flag. A compiler flag, however, reduces the risk inherent in the proposal: “You have more of a way to back out.” Additionally, using a compiler flag avoids thorny issues concerning preservation of C ABI stability. “You can’t do it with a runtime flag,” Gross explained, “But there’s precedent for changing the ABI behind a compiler flag”.
Reception
Gross’s proposal was greeted with a mix of excitement and robust questioning from the assembled core developers.
Carol Willing queried whether it might make more sense for nogil to carry on as a separate fork of CPython, rather than for Gross to aim to merge his work into the main branch of CPython itself. Gross, however, responded that this “was not a path to success”.
Samuel Colvin, maintainer of the pydantic library, expressed disappointment that the new proposal was for a compiler flag, rather than a runtime flag. “I can't help thinking that the level of adoption would be massively higher” if it was possible to change the setting from within Python, Colvin commented.
There was some degree of disagreement as to what the path forward from here should be. Gross appeared to be seeking a high-level decision about whether nogil was a viable way forward. The core developers in attendance, however, were reluctant to give an answer without knowing the low-level costs. “We need to lay out a plan of how to proceed,” remarked Pablo Galindo Salgado. “Just creating a PR with 20,000 lines of code changed is infeasible.”
Barry Warsaw and Itamar Ostricher both asked Gross about the impact nogil could have on third-party libraries if they wanted to support the new mode. Gross responded that the impact on many libraries would be minimal – no impact at all to a library like scikit-learn, and perhaps only 15 lines of code for numpy. Gross had received considerable interest from scientific libraries, he said, so was confident that the pressure to build separate C extensions to support nogil mode would not be unduly burdensome. Carol Willing encouraged Gross to attend scientific-computing conferences, to gather more feedback from that community.
There was also a large amount of concern from the attendees about the impact the introduction of nogil could have on CPython development. Some worried that introducing nogil mode could mean that the number of tests run in CI would have to double. Others worried that the maintenance burden would significantly increase if two separate versions of CPython were supported simultaneously: one with the GIL, and one without.
Overall, there was still a large amount of excitement and curiosity about nogil mode from the attendees. However, significant questions remain unresolved regarding the next steps for the project.