17-05-2022

17:27

First 9 career-switchers complete HAN's Make IT Work programme [Computable]

Nine participants in the Make IT Work programme at the HAN University of Applied Sciences (Hogeschool van Arnhem en Nijmegen) received their certificates yesterday. They form the first cohort of HAN students to successfully complete a retraining programme...

Fransie Becker becomes country manager of Signpost Netherlands [Computable]

Fransie Becker will be the new country manager of Signpost in the Netherlands. He is expected to grow the company to around twenty employees by next year.

Capgemini makes Airbus 'cloud-first' [Computable]

Airbus has chosen Capgemini to deliver a cloud-first transformation programme for the worldwide operations of Commercial Aircraft and Helicopters. As a strategic partner of Airbus, Capgemini will now deliver a fully managed service for the core cloud infrastructure of these Airbus operations.

Corporate espionage against Appian costs Pegasystems 2 billion [Computable]

According to the verdict of a US jury, Pegasystems committed corporate espionage against Appian and must pay more than two billion dollars. The obligation to pay only becomes final once all possible appeals have been exhausted.

Guidance for online platforms in the making [Computable]

The Netherlands Authority for Consumers and Markets (ACM) is preparing guidance that sets out what information online platforms must provide to businesses that sell goods or services to consumers via these platforms.

Hyperion Lab signs 9 AI and HPC startups [Computable]

Hyperion Lab, the innovation lab in Amsterdam Zuidoost, announces nine new startups that will take part in the Hyperion Lab Showcase Program. The six-month programme aims to support European innovations in the field of artificial intelligence...

Inkscape 1.2 is Now Available to Download [OMG! Ubuntu!]

Ahoy, a new version of Inkscape has appeared. We take a look at the key new features in Inkscape 1.2, the official release video, and share download links.

Scrivano is a New App to Take Handwritten Notes on Linux [OMG! Ubuntu!]

Scrivano is a new handwritten notes app for Linux. We look at Scrivano's features, which include a few terrific time-saving tools, and how to install it.

Inkscape 1.2 Released with Support for Multi-Page Documents, Numerous Enhancements [Linux Today]

Coming almost a year after Inkscape 1.1, the Inkscape 1.2 release is here to introduce a new Page tool that implements support for multiple pages in Inkscape documents. To access the new Page tool, click on the lowest button in the toolbar. The tool also lets you import and export multi-page PDF documents.

Also new in Inkscape 1.2 is a ‘Tiling’ Live Path Effect (LPE) that allows for interactive tiling, the ability to import SVG images from Open Clipart, Wikimedia Commons, and other online sources, on-canvas alignment snapping, as well as the ability to edit markers and dash patterns.

How to Install Nginx, MariaDB, and PHP (LEMP) on Ubuntu 22.04 LTS [Linux Today]

LEMP is an acronym for a group of free and open-source software often used to serve web applications. It represents the configuration of Nginx Web Server, MySQL / MariaDB Database, and PHP Scripting Language on a Linux operating system.

This guide shows you step-by-step the installation process of the LEMP stack, Nginx, MariaDB, and PHP, in Ubuntu 22.04 LTS.

Alt Workstation K 10.0 Released [Linux Today]

The newly published release of the “Alt Workstation K 10” distribution ships with a graphical environment based on KDE Plasma. Its boot images are prepared for the x86_64 architecture and are available over HTTP from mirrors such as Yandex Mirror, Distrib Coffee, and Infania Networks.

How to Use Sed in Linux for Basic Shell Tasks [Linux Today]

Sed is a simple program. It does not create or edit any files. Despite that, it is a powerful utility that can make your Linux life easier.

9to5Linux Weekly Roundup: May 15th, 2022 [Linux Today]

The week was really great for Linux news and releases. We got huge news from NVIDIA as they finally decided to open-source their graphics drivers, we got a new Fedora Linux release for you to play with on your PC, and we got a new generation of the Kubuntu Focus M2 Linux laptop with upgraded internals.

On top of that, I take a look at Fedora Media Writer 5.0, notify you about the upcoming end-of-life of Ubuntu 21.10 and LibreOffice 7.2, and give you the heads-up about the latest distro and software releases. You can enjoy these and much more in 9to5Linux’s Linux Weekly Roundup for May 15th, 2022, below!

How to Build and Install a Custom Kernel on Ubuntu [Linux Today]

Compiling your own custom Linux kernel allows you to extract the most of your hardware and software. Learn how to install one in Ubuntu today.

Top 10 Best Linux Distributions in 2022 For Everyone [Linux Today]

A list of the best Linux distributions in 2022 for every user – students, creators, developers, and casual users – with guidance on picking one.

NetworkManager 1.38 Released with IPv6, Other Improvements [Linux Today]

The NetworkManager 1.38 release is here to further improve IPv6 support and other key features. Learn more here.

5 Tools to Easily Create a Custom Linux Distro [Linux Today]

If you want a Linux desktop that is tailored to your needs, your best option is to create a custom Linux distro. Here’s how you can do it.

Fedora 35 v Fedora 36: What’s the Difference? [Linux Today]

Fedora 36 is here, and it’s a significant upgrade. But how is it different from Fedora 35? Compare Fedora 35 v Fedora 36 now.

Test and Code: 188: Python's Rich, Textual, and Textualize - Innovating the CLI [Planet Python]

Will McGugan has brought a lot of color to CLIs within Python due to Rich.
Then Textual started rethinking full command line applications, including layout with CSS.
And now Textualize, a new startup, is bringing CLI apps to the web.

Special Guest: Will McGugan.

Sponsored By:

  • Rollbar: With Rollbar, developers deploy better software faster. (http://rollbar.com/testandcode)

Links:

  • rich: https://github.com/Textualize/rich
  • rich-cli: https://github.com/Textualize/rich-cli
  • textual: https://github.com/Textualize/textual
  • Textualize.io: https://www.textualize.io/
  • Rich Gallery: https://www.textualize.io/rich/gallery
  • Textualize Gallery: https://www.textualize.io/textual/gallery
  • Python Bytes Podcast: https://pythonbytes.fm/
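For readers who have not tried Rich yet, here is a minimal sketch (my example, not from the episode) of the kind of colorful terminal output it enables, assuming Rich has been installed with pip:

# A tiny taste of Rich: styled console output plus a table.
from rich.console import Console
from rich.table import Table

console = Console()
console.print("[bold magenta]Hello from Rich![/bold magenta]")

table = Table(title="Episode links")
table.add_column("Project")
table.add_column("URL")
table.add_row("rich", "https://github.com/Textualize/rich")
table.add_row("textual", "https://github.com/Textualize/textual")
console.print(table)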

Python Software Foundation: The 2022 Python Language Summit: Python without the GIL [Planet Python]

If you peruse the archives of language-summit blogs, you’ll find that one theme comes up again and again: the dream of Python without the GIL. Continuing this venerable tradition, Sam Gross kicked off the 2022 Language Summit by giving the attendees an update on nogil, a project that took the Python community by storm when it was first announced in October 2021.

The GIL, or “Global Interpreter Lock”, is the key feature of Python that prevents true concurrency between threads. This is another way of saying that it makes it difficult to do multiple tasks simultaneously while only running a single Python process. Previously the main cheerleader for removing the GIL was Larry Hastings, with his famous “Gilectomy” project. The Gilectomy project was ultimately abandoned due to the fact that it made single-threaded Python code significantly slower. But after seeing Gross’s proof-of-concept fork in October, Hastings wrote in an email to the python-dev mailing list:


Sam contacted me privately some time ago to pick my brain a little. But honestly, Sam didn't need any help; he'd already taken the project further than I'd ever taken the Gilectomy.
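As a quick illustration of what the GIL means in practice (my own sketch, not part of the summit write-up): CPU-bound work does not get faster when split across threads, because only one thread executes Python bytecode at a time.

# Sketch: under the GIL, two threads doing CPU-bound work take about as long as doing it sequentially.
import threading
import time

def count_down(n):
    while n:
        n -= 1

N = 10_000_000

start = time.perf_counter()
count_down(N)
count_down(N)
print("sequential :", round(time.perf_counter() - start, 2), "s")

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads:", round(time.perf_counter() - start, 2), "s")  # roughly the same, or slower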



The current status of nogil

Since releasing his proof-of-concept fork in October – based on an alpha version of Python 3.9 – Gross stated that he’d been working to rebase the nogil changes onto 3.9.10.

3.9 had been chosen as a target for now, as reaching a level of early adoption was important in order to judge whether the project as a whole would be viable. Early adopters would not be able to use the project effectively if third-party packages didn’t work when using nogil. There is still much broader support for Python 3.9 among third-party packages than for Python 3.10, and so Python 3.9 still made more sense as a base branch for now rather than 3.10 or main.

Gross’s other update was that he had made a change in his approach with regard to thread safety. In order to make Python work effectively without the GIL, a lot of code needs to have new locks added to it in order to ensure that it is still thread-safe. Adding new locks to existing code, however, can be very difficult, as there is potential for large slowdowns in some areas. Gross’s solution had been to invent a new kind of lock, one that is “more Gilly”.



The proposal

Gross came to the Summit with a proposal: to introduce a new compiler flag in Python 3.12 that would disable the GIL.

This is a slight change to Gross’s initial proposal from October, where he brought up the idea of a runtime flag. A compiler flag, however, reduces the risk inherent in the proposal: “You have more of a way to back out.” Additionally, using a compiler flag avoids thorny issues concerning preservation of C ABI stability. “You can’t do it with a runtime flag,” Gross explained, “But there’s precedent for changing the ABI behind a compiler flag”.



Reception

Gross’s proposal was greeted with a mix of excitement and robust questioning from the assembled core developers.

Carol Willing queried whether it might make more sense for nogil to carry on as a separate fork of CPython, rather than for Gross to aim to merge his work into the main branch of CPython itself. Gross, however, responded that this “was not a path to success”.

"A lot of the value of Python is the ecosystem, not just the language… CPython really leads the way in terms of the community moving as a block.

"Removing the GIL is a really transformative step. Most Python programs just don’t use threads at the moment if they want to run on multiple cores. If nogil is to be a success, the community as a whole has to buy into it."

– Sam Gross

Samuel Colvin, maintainer of the pydantic library, expressed disappointment that the new proposal was for a compiler flag, rather than a runtime flag. “I can’t help thinking that the level of adoption would be massively higher” if it was possible to change the setting from within Python, Colvin commented.

There was some degree of disagreement as to what the path forward from here should be. Gross appeared to be seeking a high-level decision about whether nogil was a viable way forward. The core developers in attendance, however, were reluctant to give an answer without knowing the low-level costs. “We need to lay out a plan of how to proceed,” remarked Pablo Galindo Salgado. “Just creating a PR with 20,000 lines of code changed is infeasible.”

Barry Warsaw and Itamar Ostricher both asked Gross about the impact nogil could have on third-party libraries if they wanted to support the new mode. Gross responded that the impact on many libraries would be minimal – no impact at all to a library like scikit-learn, and perhaps only 15 lines of code for numpy. Gross had received considerable interest from scientific libraries, he said, so was confident that the pressure to build separate C extensions to support nogil mode would not be unduly burdensome. Carol Willing encouraged Gross to attend scientific-computing conferences, to gather more feedback from that community.

There was also a large amount of concern from the attendees about the impact the introduction of nogil could have on CPython development. Some worried that introducing nogil mode could mean that the number of tests run in CI would have to double. Others worried that the maintenance burden would significantly increase if two separate versions of CPython were supported simultaneously: one with the GIL, and one without.

Overall, there was still a large amount of excitement and curiosity about nogil mode from the attendees. However, significant questions remain unresolved regarding the next steps for the project.

Hynek Schlawack: Better Python Object Serialization [Planet Python]

The Python standard library is full of underappreciated gems. One of them allows for simple and elegant function dispatching based on argument types. This makes it perfect for serialization of arbitrary objects – for example to JSON in web APIs and structured logs.
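The gem being alluded to is functools.singledispatch, which dispatches a function on the type of its first argument. A minimal sketch of using it as a json.dumps default (my example, not the article's):

# Sketch: singledispatch picks a serializer based on the value's type.
import json
from datetime import datetime
from functools import singledispatch

@singledispatch
def to_serializable(value):
    # Fallback for types we have not registered explicitly.
    return str(value)

@to_serializable.register
def _(value: datetime):
    return value.isoformat()

# json.dumps() calls to_serializable() for anything it cannot serialize natively.
print(json.dumps({"when": datetime(2022, 5, 16, 20, 25)}, default=to_serializable))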

Andre Roberge: Python 🐍 fun with emojis [Planet Python]

At EuroSciPy in 2018, Marc Garcia gave a lightning talk which started by pointing out that scientific Python programmers like to alias everything, such as

import numpy as np
import pandas as pd

and suggested that they perhaps would prefer to use emojis, such as

import pandas as 🐼

However, Python does not support emojis as code, so the above line cannot be used.

A year prior, Thomas A Caswell had created a pull request for CPython that would have made this possible. This code would have allowed the use of emojis in all environments, including in a Python REPL and even in Jupyter notebooks. Unsurprisingly, this was rejected.

Undeterred, Geir Arne Hjelle created a project called pythonji (available on PyPI) which enabled the use of emojis in Python code, but in a much more restricted way. With pythonji, one can run modules ending with 🐍 instead of .py from a terminal. However, such modules cannot be imported, nor can emojis be used in a terminal.

When I learned about this attempt by Geir Arne Hjelle from a tweet by Mike Driscoll, I thought it would be a fun little project to implement with ideas.  Below, I use the same basic example included in the original pythonji project.


As you can see, it works in the ideas console when importing the module. It also works when running the 🐍 file as source, as long as the extension is left out.



And, it works in Jupyter notebooks too!


All of this without any need to modify CPython's source code!

😉


A. Jesse Jiryu Davis: Why Should Async Get All The Love?: Advanced Control Flow With Threads [Planet Python]

I spoke at PyCon 2022 about writing safe, elegant concurrent Python with threads. The video is coming soon; here’s a written version of the talk. Asyncio is really hip. And not just asyncio—the older async frameworks like Twisted and Tornado, and more recent ones like Trio and Curio are hip, too. I think they deserve to be! I’m a big fan. I spent a lot of time contributing to Tornado and asyncio some years ago.

PyCon: PyCon US 2022 Recordings Update [Planet Python]

We understand that the PyCon US recordings are an incredibly important resource to the community. We were looking forward to providing the PyCon US 2022 recordings very soon after the event – especially since we know many of you weren’t able to attend this year’s conference in person. Regrettably, we have encountered some technical obstacles this year. We are working with our AV partners at the venue to resolve things as soon as possible.

Because of the ongoing pandemic, we were unable to work with our usual vendor for PyCon US conferences. They are based in Canada and understandably didn’t want to commit to travel to the US this year. This resulted in PyCon US contracting with a new AV vendor for the first time in many years. We were very thorough in providing details, but ultimately this was a new team doing work to new specifications.

The onsite AV team has provided an update on the technical issues as follows: “Some of the sessions are missing audio or graphics and are being worked through. There is a backup drive of all the content that has been mailed to the editing team to hopefully resolve those that are missing graphics and/or audio.” We remain hopeful that everyone’s sessions will eventually be posted with all audio and graphics intact, but it is going to take more time than we would like.

We hope the community understands the challenges in planning this year’s event and we greatly appreciate your support and patience as we work through this issue. Planning a safe, comfortable in-person event after two years of virtual added many additional pieces that took the PSF’s small staff time and effort to implement. We will continue to provide updates on the status of the recordings and will release an announcement once they are uploaded to the PyCon US YouTube Channel and available for viewing.

16-05-2022

20:25

Secondment agencies increasingly 'retain' staff with permanent contracts [Computable]

Secondment agencies will be offering permanent contracts in the coming period to bind seconded staff to them. That form of employment should help 'hold on to' workers in the tight labour market, says the trade association for secondment organisations.

ONLYOFFICE 7.1 is Out With New PDF Viewer, Slideshow Animations + More [OMG! Ubuntu!]

Fans of open source office software are in for a treat as a brand new version of ONLYOFFICE is now available to download featuring various improvements.

Kushal Das: OAuth Security Workshop 2022 [Planet Python]

Last week I attended the OAuth Security Workshop in Trondheim, Norway. It was a 3-day, single-track conference, where the first half of each day consisted of pre-selected talks and the second half of unconference talks and side meetings. This was also my first proper conference since COVID emerged in the world.

[Image: osw starting]

Back to the starting line

After many years I felt the whole excitement of being a total newbie in something, and of suddenly being able to meet all the people behind the ideas. I reached the conference hotel in the afternoon of day 0 and met the organizers in the lobby area. That chat went on for a long time, and as more and more people kept checking into the hotel, I realized that it was a kind of reunion for many of the participants. Though a few of them had met at a conference in California just a week ago, they all were excited to meet again.

To understand how welcoming any community is, just notice how it behaves towards new folks. I think the Python community stands high in this regard, and I am very happy to say the whole OAuth/OIDC/identity-related community is excellent in this regard too. Even though I kept introducing myself as the new person in this identity land, not once did I feel unwelcome. I attended OpenID-related working group meetings during the conference, joined multiple hallway chats, and talked to people while walking around the beautiful city. Everyone was happy to explain things in detail to me, even though most of the people there have already spent 5-15+ years in the identity world.

The talks & meetings

What happens in Trondheim, stays in Trondheim.

I generally do not attend many talks at conferences, as they get recorded. But here, the conference was a single track, and also, there were no recordings.

The first talk was related to formal verification, and this was the first time I saw that (scary, in my mind) maths on the big screen. But full credit to the speakers, as they explained things in such a way that even an average programmer like me understood each step. After this talk, we jumped into the world of OAuth/OpenID. One funny thing was that whenever someone mentioned some RFC number, we found the authors inside the meeting room.

In the second half, we had the GNAP master class from Justin Richer. And once again, the speaker straightforwardly explained such deep technical details so that everyone in the room could understand it.

The evening before, a few people had mentioned that during heated technical discussions many RFC numbers would be thrown around, though in the end it was not so many that I got too scared :)

[Image: rfc count]

I also managed to meet Roland for the first time. We had longer chats about the status of Python in the identity ecosystem and also about Identity Python. I took some notes about how we can improve the usage of Python in this, and I will most probably start writing about those in the coming weeks.

In multiple talks, researchers and people from the industry pointed out mistakes made in the space from a security point of view. Even though, for many things, we have clear instructions in the specs, there is no guarantee that implementors will follow them properly, thus causing security gaps.

At the end of day 1, we had a special Organ concert at the beautiful Trondheim Cathedral. On day 2, we had a special talk, “The Viking Kings of Norway”.

If you let me talk about my experience at the conference, I don’t think I will stop before 2 hours. There was so much excitement and new information, and the whole feeling of going back to my starting days when I knew nothing much. Every discussion was full of learning opportunities (all discussions are anyway, but being a newbie brings a different level of excitement). The only sadness was leaving Anwesha & Py back in Stockholm; this was the first time I was staying away from them after moving to Sweden.

[Image: surprise]

Just before the conference ended, Aaron Parecki gave me a surprise gift. I spent time with it during the whole flight back to Stockholm.

This conference had the best food I have ever had at a conference, from breakfast to lunch, big snack tables, dinners, and restaurant meals. At least 4 people around me said during the conference, “oh, it feels like we are only eating and sometimes talking”.

Another thing I really loved to see is that the two primary conference organizers are university roommates who are continuing their friendship and journey in a very beautiful way. Standing outside the hotel after midnight, talking about random things in life, and seeing two longtime friends excited about similar things felt so nice.

Trondheim

I also want to thank the whole organizing team: the local organizers, Steinar, and the rest of the team did a superb job.

Python Morsels: Reading binary files in Python [Planet Python]

How can you read binary files in Python? And how can you read very large binary files in small chunks?

How to read a binary file in Python

If we try to read a zip file with the built-in open function in Python using the default read mode, we'll get an error:

>>> with open("exercises.zip") as zip_file:
...     contents = zip_file.read()
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8e in position 11: invalid start byte

We get an error because zip files aren't text files, they're binary files.

To read from a binary file, we need to open it with the mode rb instead of the default mode of rt:

>>> with open("exercises.zip", mode="rb") as zip_file:
...     contents = zip_file.read()
...

When you read from a binary file, you won't get back strings. You'll get back a bytes object, also known as a byte string:

>>> with open("exercises.zip", mode="rb") as zip_file:
...     contents = zip_file.read()
...
>>> type(contents)
<class 'bytes'>
>>> contents[:20]
b'PK\x03\x04\n\x00\x00\x00\x00\x00Y\x8e\x84T\x00\x00\x00\x00\x00\x00'

Byte strings don't have characters in them: they have bytes in them.

The bytes in a file won't help us very much unless we understand what they mean.
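The excerpt stops before answering the second question posed at the top, reading very large binary files in small chunks, but a common pattern for that (my sketch, not necessarily the article's approach) is to read a fixed number of bytes at a time:

# Sketch: process a large binary file in fixed-size chunks instead of reading it all at once.
def iter_chunks(path, chunk_size=64 * 1024):
    with open(path, mode="rb") as binary_file:
        while True:
            chunk = binary_file.read(chunk_size)
            if not chunk:
                break
            yield chunk

total = 0
for chunk in iter_chunks("exercises.zip"):
    total += len(chunk)
print(total, "bytes read")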

Use a library to read your binary file

You probably won't read a …

Read the full article: https://www.pythonmorsels.com/reading-binary-files-in-python/

Real Python: Linear Regression in Python [Planet Python]

You’re living in an era of large amounts of data, powerful computers, and artificial intelligence. This is just the beginning. Data science and machine learning are driving image recognition, development of autonomous vehicles, decisions in the financial and energy sectors, advances in medicine, the rise of social networks, and more. Linear regression is an important part of this.

Linear regression is one of the fundamental statistical and machine learning techniques. Whether you want to do statistics, machine learning, or scientific computing, there’s a good chance that you’ll need it. It’s best to build a solid foundation first and then proceed toward more complex methods.

By the end of this article, you’ll have learned:

  • What linear regression is
  • What linear regression is used for
  • How linear regression works
  • How to implement linear regression in Python, step by step

Free Bonus: Click here to get access to a free NumPy Resources Guide that points you to the best tutorials, videos, and books for improving your NumPy skills.

Regression

Regression analysis is one of the most important fields in statistics and machine learning. There are many regression methods available. Linear regression is one of them.

What Is Regression?

Regression searches for relationships among variables. For example, you can observe several employees of some company and try to understand how their salaries depend on their features, such as experience, education level, role, city of employment, and so on.

This is a regression problem where data related to each employee represents one observation. The presumption is that the experience, education, role, and city are the independent features, while the salary depends on them.

Similarly, you can try to establish the mathematical dependence of housing prices on area, number of bedrooms, distance to the city center, and so on.

Generally, in regression analysis, you consider some phenomenon of interest and have a number of observations. Each observation has two or more features. Following the assumption that at least one of the features depends on the others, you try to establish a relation among them.

In other words, you need to find a function that maps some features or variables to others sufficiently well.

The dependent features are called the dependent variables, outputs, or responses. The independent features are called the independent variables, inputs, regressors, or predictors.

Regression problems usually have one continuous and unbounded dependent variable. The inputs, however, can be continuous, discrete, or even categorical data such as gender, nationality, or brand.

It’s a common practice to denote the outputs with 𝑦 and the inputs with 𝑥. If there are two or more independent variables, then they can be represented as the vector 𝐱 = (𝑥₁, …, 𝑥ᵣ), where 𝑟 is the number of inputs.

When Do You Need Regression?

Typically, you need regression to answer whether and how some phenomenon influences the other or how several variables are related. For example, you can use it to determine if and to what extent experience or gender impacts salaries.

Regression is also useful when you want to forecast a response using a new set of predictors. For example, you could try to predict electricity consumption of a household for the next hour given the outdoor temperature, time of day, and number of residents in that household.

Regression is used in many different fields, including economics, computer science, and the social sciences. Its importance rises every day with the availability of large amounts of data and increased awareness of the practical value of data.

Linear Regression

Linear regression is probably one of the most important and widely used regression techniques. It’s among the simplest regression methods. One of its main advantages is the ease of interpreting results.

Problem Formulation

When implementing linear regression of some dependent variable 𝑦 on the set of independent variables 𝐱 = (𝑥₁, …, 𝑥ᵣ), where 𝑟 is the number of predictors, you assume a linear relationship between 𝑦 and 𝐱: 𝑦 = 𝛽₀ + 𝛽₁𝑥₁ + ⋯ + 𝛽ᵣ𝑥ᵣ + 𝜀. This equation is the regression equation. 𝛽₀, 𝛽₁, …, 𝛽ᵣ are the regression coefficients, and 𝜀 is the random error.

Linear regression calculates the estimators of the regression coefficients or simply the predicted weights, denoted with 𝑏₀, 𝑏₁, …, 𝑏ᵣ. These estimators define the estimated regression function 𝑓(𝐱) = 𝑏₀ + 𝑏₁𝑥₁ + ⋯ + 𝑏ᵣ𝑥ᵣ. This function should capture the dependencies between the inputs and output sufficiently well.
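To make the formulation concrete, here is a minimal sketch of estimating 𝑏₀ and 𝑏₁ for a single input with scikit-learn (my example; the full article walks through this and other approaches in detail):

# Sketch: fit y = b0 + b1*x and inspect the estimated weights.
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([5, 15, 25, 35, 45, 55]).reshape(-1, 1)  # inputs must be 2-D: (n_samples, n_features)
y = np.array([5, 20, 14, 32, 22, 38])

model = LinearRegression().fit(x, y)
print("intercept (b0):", model.intercept_)
print("slope (b1):", model.coef_[0])
print("prediction for x=60:", model.predict([[60]]))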

Read the full article at https://realpython.com/linear-regression-in-python/ »


Python for Beginners: Set Difference in Python [Planet Python]

Sets are used to store unique objects. Sometimes, we might need to find the elements in a set that are not present in another given set. For this, we use the set difference operation. In this article, we will discuss what the set difference is. We will also discuss approaches to finding the set difference in python.

What is the Set Difference?

Given two sets A and B, the set difference (A-B) is the set consisting of all the elements that belong to A but are not present in set B.

Similarly, the set difference (B-A) is a set consisting of all the elements that belong to B but are not present in set A. 

Consider the following sets.

A={1,2,3,4,5,6,7}

B={5,6,7,8,9,10,11}

Here, set A-B will contain the elements 1, 2, 3, and 4, as these elements are present in set A but do not belong to set B. Similarly, set B-A will contain the elements 8, 9, 10, and 11, as these elements are present in set B but do not belong to set A.

Let us now discuss approaches to find set difference in python. 

How to Find The Set Difference in Python?

Given the sets A and B, if we want to find the set difference A-B, we will first create an empty set named output_set. After that, we will traverse set A using a for loop. During traversal, we will check for each element whether it is present in set B or not. If an element of set A doesn’t belong to set B, we will add it to output_set using the add() method.

After execution of the for loop, we will get the set difference A-B in the output_set. You can observe this in the following example.

A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
output_set = set()
for element in A:
    if element not in B:
        output_set.add(element)
print("The set A is:", A)
print("The set B is:", B)
print("The set A-B is:", output_set)

Output:

The set A is: {1, 2, 3, 4, 5, 6, 7}
The set B is: {5, 6, 7, 8, 9, 10, 11}
The set A-B is: {1, 2, 3, 4}

If we want to find the set difference B-A, we will traverse set B using a for loop. During traversal, we will check for each element whether it is present in set A or not. If an element of set B doesn’t belong to set A, we will add it to output_set using the add() method.

After execution of the for loop, we will get the set difference B-A in the output_set. You can observe this in the following example.

A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
output_set = set()
for element in B:
    if element not in A:
        output_set.add(element)
print("The set A is:", A)
print("The set B is:", B)
print("The set B-A is:", output_set)

Output:

The set A is: {1, 2, 3, 4, 5, 6, 7}
The set B is: {5, 6, 7, 8, 9, 10, 11}
The set B-A is: {8, 9, 10, 11}

Find Set Difference Using The difference() Method in Python

Python provides us with the difference() method to find the set difference. The difference() method, when invoked on set A, takes set B as an input argument, calculates the set difference, and returns a set containing the elements of (A-B). You can observe this in the following example.

A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
output_set = A.difference(B)
print("The set A is:", A)
print("The set B is:", B)
print("The set A-B is:", output_set)

Output:

The set A is: {1, 2, 3, 4, 5, 6, 7}
The set B is: {5, 6, 7, 8, 9, 10, 11}
The set A-B is: {1, 2, 3, 4}
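For completeness, the same results can be obtained with Python's built-in - operator between sets, which behaves like difference() for set operands:

A = {1, 2, 3, 4, 5, 6, 7}
B = {5, 6, 7, 8, 9, 10, 11}
print("The set A-B is:", A - B)  # {1, 2, 3, 4}
print("The set B-A is:", B - A)  # {8, 9, 10, 11}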

Conclusion

In this article, we have discussed how to find the set difference in python. To learn more about sets, you can read this article on set comprehension in python. You might also like this article on list comprehension in python.

Mike Driscoll: PyDev of the Week: Raza (Rython) Zaidi [Planet Python]

This week we welcome Raza Zaidi (@razacodes) as our PyDev of the Week! Raza is a content creator on Twitter and YouTube. You can learn about Python, data science, Django, and more on Raza's YouTube channel. Check it out when you get a chance!

Now let's spend a few moments getting to know Raza better!

Can you tell us a little about yourself (hobbies, education, etc):

Hi, I’m Raza, Head of Dev Rel at thirdweb. An accountant by profession, but a technology enthusiast at heart. I am wildly passionate about emerging technologies and about educating people on them. I consider myself a below-average developer and walking proof that anyone can learn how to develop. Currently I’m focused on teaching developers how to get started in Web3 through my Twitter, TikTok and YouTube channels. By no means do I think that Python is the best programming language out there. In my spare time, I love to binge-watch anime.

Why did you start using Python?

I used to be head of a Data engineering platform and honestly I thought all these devs were so cool. I just started to hang out with the devs and asked them to use me as a test bunny. I learned how to spin up environments and run basic Python scripts and that’s how I got started with Python.

What other programming languages do you know and which is your favorite?

I know a bit of JavaScript and Solidity. I like Solidity, but nothing beats Python.

What projects are you working on now?

A couple. I think Python is tremendously underrepresented in the Web3 space. There are so many cool libraries, and I’m working on content to bring more awareness. Besides that, I’m diving into the beginner space again with a platform to help more people get started with Python. Stay tuned!

Which Python libraries are your favorite (core or 3rd party)?

  1. Pandas
  2. Turtle (I love drawing in Python!)
  3. thirdweb’s sdk

How did you decide to become a content creator?

I guess I look at myself as a really bad programmer and use that power to simplify concepts for myself to understand. Then I just share that information. So it wasn’t a conscious decision, I kind of rolled into it.

What challenges have you had as a content creator and how did you overcome them?

I guess finding new ideas and structure. I’m learning a lot by engaging in the community, and I need to do that more. But that’s a great way to get inspiration.


Is there anything else you’d like to say?

Please reach out if you also want to spread the message about Python. I’m looking for like minded devs who want to contribute to beginner content and to help people get started to develop in Python!

 

Thanks for doing the interview, Raza!

ListenData: Only size-1 arrays can be converted to Python scalars [Planet Python]

NumPy is one of the most used modules in Python, and it is used in a variety of tasks ranging from creating arrays to mathematical and statistical calculations. NumPy also brings efficiency to Python programming. While using NumPy you may encounter this error: TypeError: only size-1 arrays can be converted to Python scalars. It is one of the most frequently appearing errors, and sometimes it becomes a daunting challenge to solve.

Solution : Only size-1 arrays can be converted to Python scalars

Meaning : Only Size 1 Arrays Can Be Converted To Python Scalars Error

This error generally appears when Python expects a single value but you pass an array consisting of multiple values. For example: you want to calculate the exponential of an array, but the function for the exponential was designed for a scalar variable (which means a single value). When you pass a NumPy array to the function, it raises this error. This error handling prevents your code from processing further and avoids unexpected output from the function later.

There are 5 methods to solve this error.

Solutions with examples

Create Reproducible Example

Let's understand the issue with an example. Suppose you have an array consisting of decimal values and your manager asked you to convert it into integers.
Let's create a NumPy array containing decimals (floats):

import numpy as np
x = np.array([2, 3.5, 4, 5.3, 27])
Let's convert to integer values (without decimals)
np.int(x)
TypeError: only size-1 arrays can be converted to Python scalars
np.int() is a deprecated alias, so you can simply use int(x), but you will get the same error. This is because both np.int() and int(x) accept only a single value, not multiple values stored in an array. In other words, you passed an array instead of a scalar variable.

Solution 1 : Using .astype() method

In order to convert a NumPy array of float values to integer values, we can instead use the following code:
x.astype(int)
Output
array([ 2,  3,  4,  5, 27])
3.5 and 5.3 from the original array have been converted to 3 and 5.

In order to reflect the changes in the x array, use the code below:

x = x.astype(int)
READ MORE »
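Two other standard fixes worth knowing (my additions; they may or may not be among the article's remaining methods) are a plain list comprehension and np.vectorize:

import numpy as np

x = np.array([2, 3.5, 4, 5.3, 27])

# Convert element by element with a list comprehension ...
as_int_list = [int(value) for value in x]   # [2, 3, 4, 5, 27]

# ... or wrap int() so it applies across the whole array.
as_int_array = np.vectorize(int)(x)         # array([ 2,  3,  4,  5, 27])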

11:34

We’re Off — Ubuntu 22.10 Daily Builds Available to Download [OMG! Ubuntu!]

Download the latest Ubuntu 22.10 daily build to help test the next version of Ubuntu as 'Kinetic Kudu' development kicks into gear and new features are added.

"Morphex's Blogologue": JSON viewer for JSON database [Planet Python]

I was looking to get a little done on the ethereum-classic-taxman accounting tool today, and thought a bit outside the box about what I could need in there that isn't a direct priority.

The tool uses JSON databases; I switched a little while back because there could be security issues related to using a Python pickle database as the database backend.

An added benefit of using JSON is that its content is easy to view, and, for example for debugging purposes, I thought it could be a good thing to have a tool that creates a view of the data that is easy to navigate and read.

So I created this little script:

https://github.com/morphex/ethereum-classic-taxman/blob/main...

There are graphical JSON viewers on Ubuntu for example, but this little script can also have its output piped into a file, so that a database can be edited by hand in an editor. Or it could be piped to less on Linux/UNIX for viewing and searching.

On a related note, I saw some people lost their savings on the recent Luna/Terra crash. On the upside, I guess now is a chance to make a bet that the new variant with a massively higher amount of coins minted will succeed.

Podcast.__init__: Take Control Of Your Digital Photos By Running Your Own Smart Library Manager With LibrePhotos [Planet Python]

Summary

Digital cameras and the widespread availability of smartphones has allowed us all to generate massive libraries of personal photographs. Unfortunately, now we are all left to our own devices of how to manage them. While cloud services such as iPhotos and Google Photos are convenient, they aren’t always affordable and they put your pictures under the control of large companies with their own agendas. LibrePhotos is an open source and self-hosted alternative to these services that puts you in control of your digital memories. In this episode the maintainer of LibrePhotos, Niaz Faridani-Rad, explains how he got involved with the project, the capabilities that it offers for managing your image library, and how to get your own instance set up to take back control of your pictures.

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • This episode is sponsored by Mergify. It’s an amazing tool to make you and your team way more productive with GitHub. Mergify is all about leveling up your pull requests with useful features that eliminate busy work. Automatic merges allow you define the conditions for acceptance and Mergify will take care of merging the pull request as soon as it’s ready. Automatic updates take care of merging your pull requests serially on top of each other, so there is no way to introduce a regression. With a merge queue you can merge your urgent pull request first, organize your Prs as you wish and Mergify will merge them in that order. Mergify’s backports feature will even copy the pull request into another branch once the pull request has been merged, shipping your bug fixes on multiple branches automatically. By saving time you and your team can focus on projects that matter. Mergify is coordinated with any CI and fully integrated into GitHub. They have a Startup Program that offers a 12 months credit to leverage Mergify (up to $21,000 of value). Start saving time; visit pythonpodcast.com/mergify today to sign up for a demo and get started! Or just click the link in the show notes.
  • Your host as usual is Tobias Macey and today I’m interviewing Niaz Faridani-Rad about LibrePhotos, an open source, self-hosted application for managing your personal photo collection

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you describe what LibrePhotos is and the story behind it?
  • What are the core objectives of the project?
    • What kind of users are you focused on?
  • What are some of the major features of LibrePhotos?
  • There are a number of open source and commercial options for different photo oriented use cases. What are the main capabilities that influence someone’s decision to use one over the other?
  • Many people’s baseline expectations will be around services such as Google Photos or iPhotos. What are some of the challenges that you face in trying to provide a comparable experience?
    • One of the features that users rely on with these services is backup/disaster recovery of their photo library. What is the recommended approach for users of LibrePhotos?
  • Can you describe how LibrePhotos is architected?
    • How have the design and goals evolved since you first started working on it?
  • How have recent advances in machine learning algorithms and related tooling improved the availability and quality of advanced features in LibrePhotos?
    • How much improvement of accuracy in face/object recognition do you see as users invest in cataloging and organizing their collections?
    • Is there a minimum quantity of images/individual people that is necessary to start using the ML-powered features?
  • What kinds of storage locations are supported?
  • What are the interfaces available for extending/enhancing/integrating with LibrePhotos?
  • What are the most interesting, innovative, or unexpected ways that you have seen LibrePhotos used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on LibrePhotos?
  • When is LibrePhotos the wrong choice?
  • What do you have planned for the future of LibrePhotos?

Keep In Touch

Picks

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA.

Zato Blog: Integrating with Jira APIs [Planet Python]

Overview

Continuing in the series of articles about newest cloud connections in Zato 3.2, this episode covers Atlassian Jira from the perspective of invoking its APIs to build integrations between Jira and other systems.

There are essentially two use modes of integrations with Jira:

  1. Jira reacts to events taking place in your projects and invokes your endpoints accordingly via WebHooks. In this case, it is Jira that explicitly establishes connections with and sends requests to your APIs.
  2. Jira projects are queried periodically or as a consequence of events triggered by Jira using means other than WebHooks.
[Diagram: Jira integration types]

The first case is usually more straightforward to conceptualize - you create a WebHook in Jira, point it to your endpoint and Jira invokes it when a situation of interest arises, e.g. a new ticket is opened or updated. I will talk about this variant of integrations with Jira in a future instalment as the current one is about the other situation, when it is your systems that establish connections with Jira.

The reason why it is more practical to first speak about the second form is that, even if WebHooks are somewhat easier to reason about, they do come with their own ramifications.

To start off, assuming that you use the cloud-based version of Jira (e.g. https://example.atlassian.net), you need to have a publicly available endpoint for Jira to invoke through WebHooks. Very often, this is undesirable because the systems that you need to integrate with may be internal ones, never meant to be exposed to public networks.

Secondly, your endpoints need to have a TLS certificate signed by a public Certificate Authority and they need to be accessible on port 443. Again, both of these are something that most enterprise systems will not allow at all or it may take months or years to process such a change internally across the various corporate departments involved.

Lastly, even if a WebHook can be used, it is not always a given that the initial information that you receive in the request from a WebHook will already contain everything that you need in your particular integration service. Thus, you will still need a way to issue requests to Jira to look up details of a particular object, such as tickets, in this way reducing WebHooks to the role of initial triggers of an interaction with Jira, e.g. a WebHook invokes your endpoint, you have a ticket ID on input and then you invoke Jira back anyway to obtain all the details that you actually need in your business integration.

The end situation is that, although WebHooks are a useful concept that I will write about in a future article, they may very well not be sufficient for many integration use cases. That is why I start with integration methods that are alternative to WebHooks.

Alternatives to WebHooks

If, in our case, we cannot use WebHooks then what next? Two good approaches are:

  1. Scheduled jobs
  2. Reacting to emails (via IMAP)

Scheduled jobs will let you periodically inquire with Jira about the changes that you have not processed yet. For instance, with a job definition as below:

[Screenshots: Zato scheduler menu and job creation form]

Now, the service configured for this job will be invoked once per minute to carry out any integration work required. For instance, it can get a list of tickets since the last time it ran, process each of them as required in your business context, and update a database with information about what has just been done - the database can be based on Redis, MongoDB, SQL or anything else.

Integrations built around scheduled jobs make the most sense when you need to make periodic sweeps across large swaths of business data; these are the “Give me everything that changed in the last period” kind of interactions, when you do not know precisely how much data you are going to receive.

In the specific case of Jira tickets, though, an interesting alternative may be to combine scheduled jobs with IMAP connections:

[Screenshots: Zato IMAP menu and connection creation form]

The idea here is that when new tickets are opened, or when updates are made to existing ones, Jira will send out notifications to specific email addresses and we can take advantage of it.

For instance, you can tell Jira to CC or BCC an address such as zato@example.com. Now, Zato will still run a scheduled job, but instead of connecting with Jira directly, that job will look up unread emails in its inbox (“UNSEEN” per the relevant RFC).

Anything that is unread must be new since the last iteration, which means that we can process each such email from the inbox, in this way guaranteeing that we process only the latest updates and dispensing with the need for our own database of tickets already processed. We can extract the ticket ID or other details from the email, look up its details in Jira and then continue as needed.

All the details of how to work with IMAP emails are provided in the documentation but it would boil down to this:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):

    def handle(self):
        conn = self.email.imap.get('My Jira Inbox').conn

        for msg_id, msg in conn.get():

            # Process the message here ..
            process_message(msg.data)

            # .. and mark it as seen in IMAP.
            msg.mark_seen()

The natural question is - how would the “process_message” function extract details of a ticket from an email?

There are several ways:

  1. Each email has a subject of a fixed form - “[JIRA] (ABC-123) Here goes description”. In this case, ABC-123 is the ticket ID.
  2. Each email will contain a summary, such as the one below, which can also be parsed:
         Summary: Here goes description
             Key: ABC-123
             URL: https://example.atlassian.net/browse/ABC-123
         Project: My Project
      Issue Type: Improvement
Affects Versions: 1.3.17
     Environment: Production
        Reporter: Reporter Name
        Assignee: Assignee Name
  3. Finally, each email will have an “X-Atl-Mail-Meta” header with interesting metadata that can also be parsed and extracted:
X-Atl-Mail-Meta: user_id="123456:12d80508-dcd0-42a2-a2cd-c07f230030e5",
                 event_type="Issue Created",
                 tenant="https://example.atlassian.net"

The first option is the most straightforward and likely the most convenient one - simply parse out the ticket ID and call Jira with that ID on input for all the other information about the ticket. How to do it exactly is presented in the next chapter.
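For instance, a minimal sketch of pulling the key out of such a subject line (my example, assuming the fixed “[JIRA] (ABC-123) …” format shown above):

# Sketch: extract the ticket key from a Jira notification subject.
import re

subject = '[JIRA] (ABC-123) Here goes description'
match = re.search(r'\[JIRA\] \((?P<key>[A-Z][A-Z0-9]*-\d+)\)', subject)

if match:
    key = match.group('key')  # 'ABC-123'
    # .. this is the value to pass to the Jira service shown below.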

Regardless of how we parse the emails, the important part is that we know that we invoke Jira only when there are new or updated tickets - otherwise there would not have been any new emails to process. Moreover, because it is our side that invokes Jira, we do not expose our internal system to the public network directly.

However, from the perspective of the overall security architecture, email is still part of the attack surface so we need to make sure that we read and parse emails with that in view. In other words, regardless of whether it is Jira invoking us or our reading emails from Jira, all the usual security precautions regarding API integrations and accepting input from external resources, all that still holds and needs to be part of the design of the integration workflow.

Creating Jira connections

The above presented the ways in which we can arrive at the step of when we invoke Jira and now we are ready to actually do it.

As with other types of connections, Jira connections are created in Zato Dashboard, as below. Note that you use the email address of a user on whose behalf you connect to Jira but the only other credential is that user’s API token previously generated in Jira, not the user’s password.

[Screenshots: creating a Jira connection in Zato Dashboard]

Invoking Jira

With a Jira connection in place, we can now create a Python API service. In this case, we accept a ticket ID on input (called “a key” in Jira) and we return a few details about the ticket to our caller.

This is the kind of a service that could be invoked from a service that is triggered by a scheduled job. That is, we would separate the tasks, one service would be responsible for opening IMAP inboxes and parsing emails and the one below would be responsible for communication with Jira.

Thanks to this loose coupling, we make everything much more reusable - that the services can be changed independently is but one part and the more important side is that, with such separation, both of them can be reused by future services as well, without tying them rigidly to this one integration alone.

# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.common.typing_ import cast_, dictnone
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.jira_ import JiraClient

# ###########################################################################

@dataclass(init=False)
class GetTicketDetailsRequest(Model):
    key: str

@dataclass(init=False)
class GetTicketDetailsResponse(Model):
    assigned_to: str = ''
    progress_info: dictnone = None

# ###########################################################################

class GetTicketDetails(Service):

    class SimpleIO:
        input  = GetTicketDetailsRequest
        output = GetTicketDetailsResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: GetTicketDetailsRequest

        # .. create a reference to our connection definition ..
        jira = self.cloud.jira['My Jira Connection']

        # .. obtain a client to Jira ..
        with jira.conn.client() as client: # type: JiraClient

            # Cast to enable code completion
            client = cast_('JiraClient', client)

            # Get details of a ticket (issue) from Jira
            ticket = client.get_issue(input.key)

        # Observe that ticket may be None (e.g. invalid key), hence this 'if' guard ..
        if ticket:

            # .. build a shortcut reference to all the fields in the ticket ..
            fields = ticket['fields']

            # .. build our response object ..
            response = GetTicketDetailsResponse()
            response.assigned_to = fields['assignee']['emailAddress']
            response.progress_info = fields['progress']

            # .. and return the response to our caller.
            self.response.payload = response

# ###########################################################################

Creating a REST channel and testing it

The last remaining part is a REST channel to invoke our service through. We will provide the ticket ID (key) on input and the service will reply with what was found in Jira for that ticket.

Zato scheduler, job creation form

We are now ready for the final step - we invoke the channel, which invokes the service which communicates with Jira, transforming the response from Jira to the output that we need:

$ curl localhost:17010/jira1 -d '{"key":"ABC-123"}'
{
    "assigned_to":"zato@example.com",
    "progress_info": {
        "progress": 10,
        "total": 30
    }
}
$

And this is everything for today - remember that this is just one way of integrating with Jira. The other one, using WebHooks, is something that I will cover in a future article.

Next steps

  • Start the tutorial to learn how to integrate APIs and build systems. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single and consistent API to its callers.

  • Visit the support page if you need assistance.

  • Para aprender más sobre las integraciones de Zato y API en español, haga clic aquí.

  • Pour en savoir plus sur les intégrations API avec Zato en français, cliquez ici.

John Ludhi/nbshare.io: Save Pandas DataFrame as CSV file [Planet Python]

How To Save Pandas DataFrame As CSV File

To save a Pandas DataFrame to a CSV or Excel file, use the following commands...

  1. df.to_csv('data.csv', index=False)
  2. df.to_excel('data.xls', index=False)

In this notebook, we will learn about saving a Pandas DataFrame to a CSV file.

For this exercise we will use dummy data.

In [1]:
import pandas as pd

Let us first create a Python list of dictionaries where each dictionary contains information about a trading stock.

In [2]:
data = [{'tickr':'intc', 'price':45, 'no_of_employees':100000}, {'tickr':'amd', 'price':85, 'no_of_employees':20000}]

Let us now convert the above list to a Pandas DataFrame using the pd.DataFrame method.

In [3]:
df = pd.DataFrame(data)

df is a Pandas DataFrame. Let us print it.
To learn more about Pandas and DataFrames, check out the following notebooks...
https://www.nbshare.io/notebooks/pandas/

In [4]:
print(df)
  tickr  price  no_of_employees
0  intc     45           100000
1   amd     85            20000

We can save this DataFrame using the df.to_csv method as shown below. Note that the first argument in the command below is the file name, and the second argument 'index=False' stops Pandas from inserting row (or index) numbers for each row.

In [5]:
df.to_csv('data.csv', index=False)

The above command should create a 'data.csv' file in our current directory. Let us check that using the 'ls' command.

In [6]:
ls -lrt data.csv
-rw-r--r-- 1 root root 56 May 15 00:40 data.csv

Yes indeed, the file is there. Let us check the contents of this file using the Unix 'cat' command.
Note: I am running this notebook on a Linux machine, which is why I am able to run these Unix commands from the Jupyter notebook.

In [7]:
cat data.csv
tickr,price,no_of_employees
intc,45,100000
amd,85,20000

As we see above, the content is a comma-separated list of values. Instead of a comma, we can use any other separator via the "sep" argument.

In [11]:
df.to_csv('data.csv', index=False,sep="|")
In [12]:
cat data.csv
tickr|price|no_of_employees
intc|45|100000
amd|85|20000

Note: There are a lot of options which df.to_csv can take. Check out the complete signature below...

df.to_csv(
    path_or_buf: 'FilePathOrBuffer[AnyStr] | None' = None,
    sep: 'str' = ',',
    na_rep: 'str' = '',
    float_format: 'str | None' = None,
    columns: 'Sequence[Hashable] | None' = None,
    header: 'bool_t | list[str]' = True,
    index: 'bool_t' = True,
    index_label: 'IndexLabel | None' = None,
    mode: 'str' = 'w',
    encoding: 'str | None' = None,
    compression: 'CompressionOptions' = 'infer',
    quoting: 'int | None' = None,
    quotechar: 'str' = '"',
    line_terminator: 'str | None' = None,
    chunksize: 'int | None' = None,
    date_format: 'str | None' = None,
    doublequote: 'bool_t' = True,
    escapechar: 'str | None' = None,
    decimal: 'str' = '.',
    errors: 'str' = 'strict',
    storage_options: 'StorageOptions' = None,
) -> 'str | None'
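For example, a few of these options could be combined like this. This is only an illustrative snippet, reusing the df created above:

# Illustrative only: combine a few common to_csv options.
df.to_csv(
    'data.tsv',
    sep='\t',             # use tabs instead of commas
    na_rep='NA',          # how missing values are written
    float_format='%.2f',  # write floats with two decimal places
    index=False,          # do not write the row index
)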

Ned Batchelder: Cairo in Jupyter, better [Planet Python]

I finally came up with a way I like to create PyCairo drawings in a Jupyter notebook.

A few years ago I wrote here about how to draw Cairo SVG in a Jupyter notebook. That worked, but wasn’t as convenient as I wanted. Now I have a module that manages the PyCairo contexts for me. It automatically handles the displaying of SVG and PNG directly in the notebook, or lets me write them to a file.

The module is drawing.py.

The code looks like this (with a sample drawing copied from the PyCairo docs):

from drawing import cairo_context


def demo():
    with cairo_context(200, 200, format="svg") as context:
        x, y, x1, y1 = 0.1, 0.5, 0.4, 0.9
        x2, y2, x3, y3 = 0.6, 0.1, 0.9, 0.5
        context.scale(200, 200)
        context.set_line_width(0.04)
        context.move_to(x, y)
        context.curve_to(x1, y1, x2, y2, x3, y3)
        context.stroke()
        context.set_source_rgba(1, 0.2, 0.2, 0.6)
        context.set_line_width(0.02)
        context.move_to(x, y)
        context.line_to(x1, y1)
        context.move_to(x2, y2)
        context.line_to(x3, y3)
        context.stroke()
    return context

demo()

Using demo() in a notebook cell will draw the SVG. Nice.

The key to making this work is Jupyter’s special methods _repr_svg_, _repr_png_, and a little _repr_html_ thrown in also.
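For context, the hook works like this: if the object returned from a cell defines one of those methods, Jupyter calls it and displays the result inline. A minimal sketch follows; the Circle class is only an illustration and is not part of drawing.py:

class Circle:
    """Toy object that Jupyter renders as SVG via the _repr_svg_ hook."""

    def __init__(self, radius=50, color="rebeccapurple"):
        self.radius = radius
        self.color = color

    def _repr_svg_(self):
        # Jupyter calls this method and displays the returned SVG markup inline.
        size = self.radius * 2
        return (
            f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            f'<circle cx="{self.radius}" cy="{self.radius}" r="{self.radius}" '
            f'fill="{self.color}"/></svg>'
        )

Circle()  # as the last expression in a notebook cell, this draws the circle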

The code is at drawing.py. I created it so that I could play around with Truchet tiles:

A multi-scale Truchet tiling

Python Software Foundation: The 2022 Python Language Summit: Performance Improvements by the Faster CPython team [Planet Python]

Python 3.11, if you haven’t heard, is fast. Over the past year, Microsoft has funded a team – led by core developers Mark Shannon and Guido van Rossum – to work full-time on making CPython faster. With additional funding from Bloomberg, and help from a wide range of other contributors from the community, the results have borne fruit. On the pyperformance benchmarks at the time of the beta release, Python 3.11 was around 1.25x faster than Python 3.10, a phenomenal achievement.

But there is more still to be done. At the 2022 Python Language Summit, Mark Shannon presented on where the Faster CPython project aims to go next. The future’s fast.



The first problem Shannon raised was a problem of measurements. In order to know how to make Python faster, we need to know how slow Python is currently. But how slow at doing what, exactly?

Good benchmarks are vital for a project that aims to optimise Python for general usage. For that, the Faster CPython team needs the help of the community at large. The project “needs more benchmarks,” Shannon said – it needs to understand more precisely what the user base at large is using Python for, how they’re doing it, and what makes it slow at the moment (if it is slow!).

A benchmark, Shannon explained, is “just a program that we can time”. Anybody with a benchmark – or even just a suggestion for a benchmark! – that they believe is representative of a larger project they’re working on is invited to submit them to the issue tracker at the python/pyperformance repository on GitHub.



Nonetheless, the Faster CPython team has plenty to be getting on with in the meantime.

Much of the optimisation work in 3.11 has been achieved through the implementation of PEP 659, a "specializing adaptive interpreter". The adaptive interpreter that Shannon and his team have introduced tracks individual bytecodes at various points in a program's execution. When it spots an opportunity, a bytecode may be "quickened": this means that a slow bytecode that can do many things is replaced by the interpreter with a more specialised bytecode that is very good at doing one specific thing. The work on PEP 659 has now largely been done, but major parts, such as dynamic specialisations of for-loops and binary operations, are still to be completed.
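As a rough illustration of what specialisation looks like in practice (assuming a CPython 3.11 build, where dis.dis gained an adaptive parameter), one can warm a function up and then inspect its quickened bytecode:

import dis

def add(a, b):
    return a + b

# Run the function enough times for the adaptive interpreter to specialise it.
for _ in range(1000):
    add(1, 2)

# With adaptive=True, dis shows the quickened instructions, e.g. a specialised
# form such as BINARY_OP_ADD_INT in place of the generic BINARY_OP.
dis.dis(add, adaptive=True)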

Shannon noted that Python also has essentially the same memory consumption in 3.11 as it did in 3.10. This is something he’d like to work on: a smaller memory overhead generally means fewer reference-counting operations in the virtual machine, a lower garbage-collection overhead, and smoother performance as a result of it all.

Another big remaining avenue for optimisations is the question of C extensions. CPython’s easy interface with C is its major advantage over other Python implementations such as PyPy, where incompatibilities with C extensions are one of the biggest hurdles for adoption by users. The optimisation work that has been done in CPython 3.11 has largely ignored the question of extension modules, but Shannon now wants to open up the possibility of exposing low-level function APIs to the virtual machine, reducing the overhead time of communicating between Python code and C code.



Is that a JIT I see on the horizon?

Lastly, but certainly not least, Shannon said, “everybody wants a JIT compiler… even if it doesn’t make sense yet”.

A JIT (“just-in-time”) compiler is the name given for a compiler that dynamically detects where performance bottlenecks exist in a program as the program is running. Once these bottlenecks have been identified, the JIT compiles these parts of the program on-the-fly into native machine code in order to speed things up. It’s a similar idea to Shannon’s PEP 659, but goes much further, since the specialising adaptive interpreter never goes beyond the bytecode level.

The idea of using a JIT compiler for Python is hardly new. PyPy’s JIT compiler is the major source of the large performance gains the project has over CPython in some areas. Third-party projects, such as pyjion and numba, bring just-in-time compilation to CPython that’s just a pip install away. Integrating a JIT into the core of CPython, however, would be materially different.

Shannon has historically voiced scepticism about the wisdom of introducing a JIT compiler into CPython itself, and said that work on introducing one is still some way off. A JIT, according to Shannon, will probably not arrive until 3.13 at the earliest, given the amount of lower-hanging fruit that is still to be worked on. The first step towards a JIT, he explained, would be to implement a trace interpreter, which would allow for better testing of concepts and lay the groundwork for future changes.



Playing nicely with the other Python projects

The gains Shannon’s team has achieved are hugely impressive, and likely to benefit the community as a whole in a profound way. But various problems lie on the horizon. Sam Gross’s proposal for a version of CPython without the Global Interpreter Lock (the nogil fork) has potential for speeding up multithreaded Python code in very different ways to the Faster CPython team’s work – but it could also be problematic for some of the optimisations that have already been implemented, many of which assume that the GIL exists. Eric Snow’s dream of achieving multiple subinterpreters within a single process, meanwhile, will have a smaller performance impact on single-threaded code compared to nogil, but could still create some minor complications for Shannon’s team.

15-05-2022

19:08

Juri Pakaste: Creating icons in Xcode playgrounds [Planet Python]

I'm no good at drawing. I have Affinity Designer and I like it well enough, but it requires more expertise than I have, really. Usually when I want to draw things, I prefer to retreat back to code.

Xcode playgrounds are pretty OK for writing your graphics code. Select your drawing technology of choice to create an image, create a view that displays it, make it the live view with PlaygroundPage.current.setLiveView and you're done. Well, almost. How do you get the image out of there?

Say you're creating icons for an iOS project. You want a bunch of variously sized versions of the same icon (I'm assuming here you aren't finessing the different versions too much, or otherwise you wouldn't be reading a tutorial on how to generate images in code), and you want to get them into an asset catalog in Xcode. Xcode's asset catalog editor can accept dragged files, so that seems like something we could try to enable.

SwiftUI makes it really easy.

Start with a function that draws the icon into a CGImage. This one just draws a purplish rectangle. It won't win any ADAs, but it'll serve for this tutorial:

func makeImage(size: CGSize) -> CGImage {
    let ctx = CGContext(
        data: nil,
        width: Int(size.width),
        height: Int(size.height),
        bitsPerComponent: 8,
        bytesPerRow: 0,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    )!
    let rect = CGRect(origin: .zero, size: size)
    ctx.setFillColor(red: 0.9, green: 0.4, blue: 0.6, alpha: 1.0)
    ctx.fill(rect)
    let image = ctx.makeImage()!
    return image
}

Next define a bunch of values for the icon sizes that Xcode likes. As of Xcode 13 and iOS 15, something like this is a good representation of what you need:

enum IconSize: CGFloat {
    case phoneNotification = 20.0
    case phoneSettings = 29.0
    case phoneSpotlight = 40.0
    case phoneApp = 60.0
    case padApp = 76.0
    case padProApp = 83.5
}

extension IconSize: CustomStringConvertible {
    var description: String {
        switch self {
        case .phoneNotification: return "iPhone/iPad Notification (\(self.rawValue))"
        case .phoneSettings: return "iPhone/iPad Settings (\(self.rawValue))"
        case .phoneSpotlight: return "iPhone/iPad Spotlight (\(self.rawValue))"
        case .phoneApp: return "iPhone App (\(self.rawValue))"
        case .padApp: return "iPad App (\(self.rawValue))"
        case .padProApp: return "iPad Pro App (\(self.rawValue))"
        }
    }
}

Then define a struct that holds one extra bit of information: the scale we're working at.

struct IconVariant {
    let size: IconSize
    let scale: CGFloat

    var scaledSize: CGSize {
        let scaled = self.scale * self.size.rawValue
        return CGSize(width: scaled, height: scaled)
    }
}

extension IconVariant: CustomStringConvertible {
    var description: String { "\(self.size) @ \(self.scale)x" }
}

extension IconVariant: Identifiable {
    var id: String { self.description }
}

The descriptions are useful for you, the human; the Identifiable conformance will be helpful when you set up a SwiftUI view showing the variants.

Next define all the variants you want:

let icons: [IconVariant] = [
    IconVariant(size: .phoneNotification, scale: 2),
    IconVariant(size: .phoneNotification, scale: 3),
    IconVariant(size: .phoneSettings, scale: 2),
    IconVariant(size: .phoneSettings, scale: 3),
    IconVariant(size: .phoneSpotlight, scale: 2),
    IconVariant(size: .phoneSpotlight, scale: 3),
    IconVariant(size: .phoneApp, scale: 2),
    IconVariant(size: .phoneApp, scale: 3),
    IconVariant(size: .phoneNotification, scale: 1),
    IconVariant(size: .phoneSettings, scale: 1),
    IconVariant(size: .phoneSpotlight, scale: 1),
    IconVariant(size: .padApp, scale: 1),
    IconVariant(size: .padApp, scale: 2),
    IconVariant(size: .padProApp, scale: 2),
]

Then let's start work on getting those variants on screen. We'll use a simple SwiftUI view with stacks for it; it won't be pretty, but it'll do what's needed.

struct IconView: View {
    var body: some View {
        VStack {
            ForEach(icons) { icon in
                HStack {
                    let cgImage = makeImage(size: icon.scaledSize)
                    Text(String(describing: icon))
                    Image(cgImage, scale: 1.0, label: Text(String(describing: icon)))
                }
            }
        }
    }
}

PlaygroundPage.current.setLiveView(IconView())

As promised, functionality over form:

Screenshot of labeled squares

Now we need just the glue to enable dragging. Add a CGImage extension that makes it easier to export the image as PNG data:

extension CGImage {
    var png: Data? {
        guard let mutableData = CFDataCreateMutable(nil, 0),
              let destination = CGImageDestinationCreateWithData(mutableData, "public.png" as CFString, 1, nil)
        else { return nil }
        CGImageDestinationAddImage(destination, self, nil)
        guard CGImageDestinationFinalize(destination) else { return nil }
        return mutableData as Data
    }
}

To make the images in the view draggable, you'll need to use the onDrag view modifier. It requires a function that returns a NSItemProvider. The nicest way to create one is probably with a custom class that conforms to NSItemProviderWriting. Something like this:

final class IconProvider: NSObject, NSItemProviderWriting {
    struct UnrecognizedTypeIdentifierError: Error {
        let identifier: String
    }

    let image: CGImage

    init(image: CGImage) {
        self.image = image
    }

    func loadData(
        withTypeIdentifier typeIdentifier: String,
        forItemProviderCompletionHandler completionHandler: @escaping (Data?, Error?) -> Void
    ) -> Progress? {
        guard typeIdentifier == "public.png" else {
            completionHandler(nil, UnrecognizedTypeIdentifierError(identifier: typeIdentifier))
            return nil
        }
        completionHandler(self.image.png, nil)
        // Progress: all done in one step.
        let progress = Progress(parent: nil)
        progress.totalUnitCount = 1
        progress.completedUnitCount = 1
        return progress
    }

    static var writableTypeIdentifiersForItemProvider: [String] {
        ["public.png"]
    }
}

And then the last thing needed is the onDrag handler. Add it to the Image line in the IconView you created earlier.

Image(cgImage, scale: 1.0, label: Text(String(describing: icon)))
    .onDrag {
        NSItemProvider(object: IconProvider(image: cgImage))
    }

Refresh the playground preview and the images are waiting for you to drag them into the asset catalog.

Mirek Długosz: Announcing Kustosz [Planet Python]

I’m happy to announce Kustosz, a new feed reader that aims to help you focus on worthwhile content.

These days, many open source RSS readers still try to fill the void left out by Google Reader - their main goal is to provide familiar visuals and user experience. Meanwhile, proprietary feed readers incorporate machine learning techniques to guide you through the vast ocean of content and try to “relieve” you from the burden of “reading everything”.

I find both of these approaches problematic. Google Reader was discontinued a decade ago. Everyone who wanted "a Google Reader alternative" has settled on something else or moved on a long time ago. There's no reason for a new project to try to solve this exact problem.

While it’s tempting to save time and mental capacity by allowing computers to decide what content you should focus on, this is also a fast lane to filter bubbles. Today, we are intimately familiar with psychological, social, and political problems created by employing this approach at scale. It’s more important than ever to let people decide what they want to read.

That’s why Kustosz comes from a different angle. Its goal is to make it easy and convenient to read content that you find worthwhile.

Kustosz provides an easy-to-use, straightforward, and distraction-free interface where your content takes center stage. It works straight in your browser, so you don't have to install additional software on your computer. It fits your device, whether you use a phone, tablet, or desktop with an external monitor.

We are all busy, and sometimes we can't read the entire article in one go. That's why Kustosz tracks how much you have read and lets you pick up right where you left off, at any time, on any device.

To help you make the most of Kustosz features, it automatically downloads the full article content from the source website. It doesn't matter if the feed author publishes only the article lead; you don't have to leave Kustosz unless you choose to.

Kustosz is not discouraged when the article you want to read is not in the site’s RSS feed. You can add any web page manually.

While Kustosz doesn’t make any decisions for you, it’s here to automate menial tasks. It has a built-in duplicate detector that can automatically hide articles you have already seen. It provides a flexible and powerful filter system that you can use to automatically hide articles you are not interested in.

And the best part is, your data is yours. Kustosz is open source and hosted on your server. This ensures that you are in control of Kustosz, its data and what it does.

From the technical point of view, Kustosz utilizes a familiar modern client-server architecture. The frontend is a Vue.js (v3) web application that relies heavily on features that became widely supported in recent years, like CSS grid or the JavaScript Intersection Observer API.

The backend is a Django web application that serves a REST-like API with the help of Django REST framework. Most of the work is done in background tasks managed by Celery. All the hard work of accessing and processing RSS / Atom feed files is done by the excellent reader library. I want to stress how immensely grateful I am to Adrian for creating and maintaining this exemplary piece of code.

I’ve been using Kustosz as my primary feed reader for about a month now, and I consider it pretty stable and fit for purpose. But it’s software, so of course it has bugs - especially in contexts distinctly different from mine. If you encounter them, feel free to create an issue or submit PR at GitHub.

As is true for most software, I don't think Kustosz will ever be truly finished. Right now it's primarily concerned with text content available on public websites, but my big dream is to support various content sources - things like email newsletters and social media sites immediately spring to mind. On the other hand, I don't think I want to re-implement RSS Bridge just for the sake of it.

Another dream of mine is to provide an integrated notepad. Reading is great, but truly worthwhile articles are thought-provoking and an invitation to conversation. Active reading demands that you write down what you understood. It would be great if you could do that from the convenience of a single application.

There’s also a little more mundane work to do - things like user interface translation framework, WebSub protocol support, and improving documentation.

Nonetheless, if Kustosz sounds like a tool you could use, please head on to Kustosz website or documentation page, where you will find system requirements and installation instructions. There’s also a container image you may use to quickly spin up an instance for testing.

Kay Hayen: Compile Python on Windows [Planet Python]

Looking to create an executable from a Python script? Let me show you the full steps to achieve it on Windows.

Steps to create a Windows executable from a Python script using Nuitka

Step 1: Add Python to Windows Path

The simple way to add Python to the PATH is to check the box during installation of CPython. You just download Python and install it, or modify an existing installation, and check the box in the installer:

check modify PATH when you install python

This box is not enabled by default. You can also manually add the Python installation path to PATH environment variable.

Note

You do not strictly have to execute this step, you can also replace python with just the absolute path, e.g. C:\Users\YourName\AppData\Local\Programs\Python\Python310\python.exe but that can become inconvenient.

Step 2: Open a Windows Prompt

This can be cmd.exe or Windows Terminal, or a terminal from an IDE like Visual Studio Code or PyCharm. Then type python to verify the installation is correct, and exit() to leave the Python prompt again.

Launch Python in Windows prompt to verify

Step 3: Install the Nuitka Python Compiler package

Now install Nuitka with the following command.

python -m pip install nuitka
Install Nuitka in Python

Step 4: Run your Program

Now run your program from the terminal. Convince yourself that everything is working.

python fancy-program.py

Note

If it's a GUI program, make sure it has a .pyw suffix, so that Python knows it is one.

Step 5: Create the Executable using Nuitka

python -m nuitka --onefile fancy-program.py

In the case of a GUI program, add one of the many options that Nuitka has to adapt for platform specifics, e.g. disabling the console window, setting a program icon, and so on.

python -m nuitka --onefile --windows-disable-console fancy-program.py

This will create fancy-program.exe.

Step 6: Run the Executable

Your executable should appear right next to fancy-program.py; opening it from Explorer or running fancy-program.exe from the terminal should work.

fancy-program.exe

14-05-2022

20:35

Python Software Foundation: The 2022 Python Language Summit: Python in the browser [Planet Python]

Python can be run on many platforms: Linux, Windows, Apple Macs, microcomputers, and even Android devices. But it’s a widely known fact that, if you want code to run in a browser, Python is simply no good – you’ll just have to turn to JavaScript.

Now, however, that may be about to change. Over the course of the last two years, and following over 60 CPython pull requests (many attached to GitHub issue #84461), Core Developer Christian Heimes and contributor Ethan Smith have achieved a state where the CPython main branch can now be compiled to WebAssembly. This opens up the possibility of being able to run arbitrary Python programs clientside inside your web browser of choice.

At the 2022 Python Language Summit, Heimes gave a talk updating the attendees of the progress he’s made so far, and where the project hopes to go next.



WebAssembly basics

WebAssembly (or “WASM”, for short), Heimes explained, is a low-level assembly-like language that can be as fast as native machine code. Unlike your usual machine code, however, WebAssembly is independent from the machine it is running on. Instead, the core principle of WebAssembly is that it can be run anywhere, and can be run in a completely isolated environment. This leads to it being a language that is extremely fast, extremely portable, and provides minimal security risks – perfect for running clientside in a web browser.







After much work, CPython now cross-compiles to WebAssembly using emscripten through the --with-emscripten-target=browser flag. The CPython test suite now also passes on emscripten builds, and work is underway to add a buildbot to CPython's fleet of automatic robot testers, to ensure this work does not regress in the future.

Users who want to try Python in the browser can do so at https://repl.ethanhs.me/. The work opens up exciting possibilities of being able to run PyGame clientside and adding Jupyter bindings.







Support status

It should be noted that cross-compiling to WebAssembly is still highly experimental, and not yet officially supported by CPython. Several important modules in the Python standard library are not currently included in the bundled package produced when --with-emscripten-target=browser is specified, leading to a number of tests needing to be skipped in order for the test suite to pass.




Nonetheless, the future’s bright. Only a few days after Heimes’s talk, Peter Wang, CEO at Anaconda, announced the launch of PyScript in a PyCon keynote address. PyScript is a tool that allows Python to be called from within HTML, and to call JavaScript libraries from inside Python code – potentially enabling a website to be written entirely in Python.

PyScript is currently built on top of Pyodide, a third-party project bringing Python to the browser, on which work began before Heimes started his work on the CPython main branch. With Heimes’s modifications to Python 3.11, this effort will only become easier.

Python Software Foundation: The 2022 Python Language Summit: Lightning talks [Planet Python]

These were a series of short talks, each lasting around five minutes.


Read the rest of the 2022 Python Language Summit coverage here.



Lazy imports, with Carl Meyer

Carl Meyer, an engineer at Instagram, presented on a proposal that has since blossomed into PEP 690: lazy imports, a feature that has already been implemented in Cinder, Instagram’s performance-optimised fork of CPython 3.8.

What’s a lazy import? Meyer explained that the core difference with lazy imports is that the import does not happen until the imported object is referenced.

Examples

In the following Python module, spam.py, with lazy imports activated, the module eggs would never in fact be imported since eggs is never referenced after the import:


# spam.py

import sys
import eggs

def main():
    print("Doing some spammy things.")
    sys.exit(0)

if __name__ == "__main__":
    main()


And in this Python module, ham.py, with lazy imports activated, the function bacon_function is imported – but only right at the end of the script, after we’ve completed a for-loop that’s taken a very long time to finish:


# ham.py

import sys
import time
from bacon import bacon_function

def main():
    for _ in range(1_000_000_000):
        print('Doing hammy things')
        time.sleep(1)
    bacon_function()
    sys.exit(0)

if __name__ == "__main__":
    main()


Meyer revealed that the Instagram team’s work on lazy imports had resulted in startup time improvements of up to 70%, memory usage improvements of up to 40%, and the elimination of almost all import cycles within their code base. (This last point will be music to the ears of anybody who has worked on a Python project larger than a few modules.)

Downsides

Meyer also laid out a number of costs to having lazy imports. Lazy imports create the risk that ImportError (or any other error resulting from an unsuccessful import) could potentially be raised… anywhere. Import side effects could also become “even less predictable than they already weren’t”.

Lastly, Meyer noted, “If you’re not careful, your code might implicitly start to require it”. In other words, you might unexpectedly reach a stage where – because your code has been using lazy imports – it now no longer runs without the feature enabled, because your code base has become a huge, tangled mess of cyclic imports.

Where next for lazy imports?

Python users who have opinions either for or against the proposal are encouraged to join the discussion on discuss.python.org.



Python-Dev versus Discourse, with Thomas Wouters

This was less of a talk, and more of an announcement.

Historically, if somebody wanted to make a significant change to CPython, they were required to post on the python-dev mailing list. The Steering Council now views the alternative venue for discussion, discuss.python.org, as a superior forum in many respects.

Thomas Wouters, Core Developer and Steering Council member, said that the Steering Council was planning on loosening the requirements, stated in several places, that emails had to be sent to python-dev in order to make certain changes. Instead, they were hoping that discuss.python.org would become the authoritative discussion forum in the years to come.



Asks from Pyston, with Kevin Modzelewski

Kevin Modzelewski, core developer of the Pyston project, gave a short presentation on ways forward for CPython optimisations. Pyston is a performance-oriented fork of CPython 3.8.12.

Modzelewski argued that CPython needed better benchmarks; the existing benchmarks on pyperformance were “not great”. Modzelewski also warned that his “unsubstantiated hunch” was that the Faster CPython team had already accomplished “greater than one-half” of the optimisations that could be achieved within the current constraints. Modzelewski encouraged the attendees to consider future optimisations that might cause backwards-incompatible behaviour changes.



Core Development and the PSF, with Thomas Wouters

This was another short announcement from Thomas Wouters on behalf of the Steering Council. After sponsorship from Google providing funding for the first ever CPython Developer-In-Residence (Łukasz Langa), Meta has provided sponsorship for a second year. The Steering Council also now has sufficient funds to hire a second Developer-In-Residence – and attendees were notified that they were open to the idea of hiring somebody who was not currently a core developer.



“Forward classes”, with Larry Hastings

Larry Hastings, CPython core developer, gave a brief presentation on a proposal he had sent round to the python-dev mailing list in recent days: a “forward class” declaration that would avoid all issues with two competing typing PEPs: PEP 563 and PEP 649. In brief, the proposed syntax would look something like this:



forward class X()

continue class X:
    # class body goes here
    def __init__(self, key):
        self.key = key


In theory, according to Hastings, this syntax could avoid issues around runtime evaluation of annotations that have plagued PEP 563, while also circumventing many of the edge cases that unexpectedly fail in a world where PEP 649 is implemented.

The idea was in its early stages, and reaction to the proposal was mixed. The next day, at the Typing Summit, there was more enthusiasm voiced for a plan laid out by Carl Meyer for a tweaked version of Hastings’s earlier attempt at solving this problem: PEP 649.



Better fields access, with Samuel Colvin

Samuel Colvin, maintainer of the Pydantic library, gave a short presentation on a proposal (recently discussed on discuss.python.org) to reduce name clashes between field names in a subclass, and method names in a base class.

The problem is simple. Suppose you're a maintainer of a library, whatever_library. You release Version 1 of your library, and one user starts to use your library to make classes like the following:

from whatever_library import BaseModel

class Farmer(BaseModel):
    name: str
    fields: list[str]


Both the user and the maintainer are happy, until the maintainer releases Version 2 of the library. Version 2 adds a method, .fields(), to BaseModel, which will print out all the field names of a subclass. But this creates a name clash with your user's existing code, which has fields as the name of an instance attribute rather than a method.

Colvin briefly sketched out an idea for a new way of looking up names that would make it unambiguous whether the name being accessed was a method or attribute.



class Farmer(BaseModel):
    $name: str
    $fields: list[str]

farmer = Farmer(name='Jones', fields=['meadow', 'highlands'])

print(farmer.$fields)   # -> ['meadow', 'highlands']
print(farmer.fields())  # -> ['name', 'fields']

13-05-2022

18:35

Kiesraad chooses Paragon for election software [Computable]

Commissioned by the Kiesraad (the Dutch Electoral Council), Paragon Customer Communications will supply software for determining election results and for submitting candidate lists. The company from Alphen aan den Rijn won the tender this week for the development,...

Judge: police may hack and read PGP messages [Computable]

The police are allowed to crack systems used to encrypt information and to watch those encrypted messages live. That is the ruling of an Amsterdam judge in a case in which a suspect was convicted on the basis of EncroChat data. Lawyers are...

Musk hits the pause button on Twitter deal [Computable]

The negotiations between Twitter and Elon Musk have been temporarily suspended. Musk, who wants to acquire the social network for 44 billion dollars, is waiting for the investigation into fake and spam accounts. There had already been plenty of fuss about...

Advice: wait with 3.5 GHz until Inmarsat is gone [Computable]

It will probably take until the end of 2023 before the 3.5 GHz frequency band becomes available for public mobile communication services. There is a lot of demand for extra spectrum, but on the 3.5 GHz band agreed for this purpose it could interfere with emergency calls from the air...

More cooperation between privacy watchdogs [Computable]

The Autoriteit Persoonsgegevens (AP, the Dutch Data Protection Authority) and its European counterparts are cooperating ever more often. Last year this happened in more than five hundred international investigations, 141 of which were completed. That is reported by the European Data Protection Board (EDPB), the European cooperation body...

Python for Beginners: Convert List of Lists to CSV File in Python [Planet Python]

Lists are one of the most frequently used data structures in python. In this article, we will discuss how we can convert a list of lists to a CSV file in python.

List of Lists to CSV in Python Using csv.writer() 

The csv module provides us with different methods to perform various operations on a CSV file. To convert a list of lists to csv in python, we can use the csv.writer() method along with the writerow() method of the resulting writer object. For this, we will use the following steps.

  • First, we will open a csv file in write mode using the open() function. The open() function takes the file name as the first input argument and the literal “w” as the second input argument to show that the file will be opened in the write mode. It returns a file object that contains the empty csv file created by the open() function.
  • After opening the file, we will create a csv.writer object using the csv.writer() method. The csv.writer() method takes the file object as an input argument and returns a writer object. Once the writer object is created, we can add data from the list of lists to the csv file using the writerow() method.
  • The writerow() method, when invoked on a writer object, takes a list of values and adds it to the csv file referred to by the writer object.
  • First, we will add the header for the CSV file. For this, we will pass a list of column names to the writerow() method.
  • After adding the header, we will use a for loop with the writerow() method to add each list to the csv file. Here, we will pass each list one by one to the writerow() method. The writerow() method adds the list to the csv file. 

After execution of the for loop, the data from the list will be added to the CSV file. To save the data, you should close the file using the close() method. Otherwise, no changes will be saved to the csv file.

The source code to convert a list of lists to a csv file using the csv.writer() method is as follows.

import csv

listOfLists = [["Aditya", 1, "Python"], ["Sam", 2, 'Java'], ['Chris', 3, 'C++'], ['Joel', 4, 'TypeScript']]
print("THe list of lists is:")
print(listOfLists)
myFile = open('demo_file.csv', 'w')
writer = csv.writer(myFile)
writer.writerow(['Name', 'Roll', 'Language'])
for data_list in listOfLists:
    writer.writerow(data_list)
myFile.close()
myFile = open('demo_file.csv', 'r')
print("The content of the csv file is:")
print(myFile.read())
myFile.close()

Output:

The list of lists is:
[['Aditya', 1, 'Python'], ['Sam', 2, 'Java'], ['Chris', 3, 'C++'], ['Joel', 4, 'TypeScript']]
The content of the csv file is:
Name,Roll,Language
Aditya,1,Python
Sam,2,Java
Chris,3,C++
Joel,4,TypeScript

Conclusion

In this article, we have discussed an approach to convert a list of lists to a csv file in python. In this approach, each list will be added to the csv file regardless of whether it has the same number of elements as there are columns in the csv. It is therefore advised to make sure that each list has the same number of elements, and that the elements appear in the same order. Otherwise, the data appended to the csv file will become inconsistent and will lead to errors.

To know more about lists in python, you can read this article on list comprehension in python. You might also like this article on dictionary comprehension in python.

The post Convert List of Lists to CSV File in Python appeared first on PythonForBeginners.com.

Test and Code: 187: Teaching Web Development, including Front End Testing [Planet Python]

When you are teaching someone web development skills, when is the right time to start teaching code quality and testing practices?

Karl Stolley believes it's never too early. Let's hear how he incorporates code quality in his courses.

Our discussion includes:

  • starting people off with good dev practices and tools
  • linting
  • html and css validation
  • visual regression testing
  • using local dev servers, including https
  • incorporating testing with git hooks
  • testing to aid in css optimization and refactoring
  • Backstop
  • Nightwatch
  • BrowserStack
  • the three-legged stool of learning and progressing as a developer: testing, version control, and documentation

Karl is also writing a book on WebRTC, so we jump into that a bit too.

Special Guest: Karl Stolley.

Sponsored By:

  • Patreon Supporters: Help support the show with as little as $1 per month and be the first to know when new episodes come out - https://www.patreon.com/testpodcast
  • Python Testing with pytest, 2nd edition: The fastest way to learn pytest and practical testing practices - https://pythontest.com/pytest-book/

Links:

  • Backstop - https://garris.github.io/BackstopJS/
  • Nightwatch - https://nightwatchjs.org/
  • BrowserStack - https://www.browserstack.com/
  • Programming WebRTC: Build Real-Time Streaming Applications for the Web by Karl Stolley - https://pragprog.com/titles/ksrtc/programming-webrtc/

12-05-2022

19:47

Skills software company AG5 raises 1.2 million [Computable]

Amsterdam-based AG5 has raised 1.2 million euros in an investment round from tech investor Peak. With the investment, AG5 will further develop its so-called skills management software and bring it to market worldwide.

Centric renews its business operations with IFS Cloud [Computable]

Centric has chosen IFS Cloud as its new enterprise resource planning (ERP) software for its business operations. IFS partner Eqeep will implement, support, and roll out the solution across all of Centric's activities in the ten European countries where the...

3D printer companies MakerBot and Ultimaker merge [Computable]

The American MakerBot and the Dutch Ultimaker, two providers in the field of desktop 3D printing, are announcing a merger. The combination is backed by the existing investors NPM Capital (Ultimaker) and Stratasys (MakerBot). They are putting in 62.4...

15 million for Leuven chip maker Pharrowtech [Computable]

Pharrowtech, a Leuven-based designer of chips for wireless communication, has raised fifteen million euros in a first financing round. The startup will put the money into, among other things, the development of the next generation of 60 GHz wireless...

CDA demands intervention on Chipsoft 'monopoly' [Computable]

The CDA demands that minister Ernst Kuipers (VWS) intervene in the hospital software market. Joba van den Berg, member of parliament for that party, argues that it is unhealthy that one party - the Amsterdam software company Chipsoft -...

Van Teijlingen steps back from SoftwareOne leadership [Computable]

Michel van Teijlingen is stepping down for health reasons as head of SoftwareOne in the Benelux. He had been Federation Lead there since October last year. He is also giving up his position as managing director of SoftwareOne Nederland.

NVIDIA Make Shock Open-Source Announcement [OMG! Ubuntu!]

Official open source NVIDIA graphics drivers get a step closer to reality as NVIDIA announced the first release of open GPU modules for its recent hardware.

This post, NVIDIA Make Shock Open-Source Announcement is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Ubuntu Preview on WSL Brings Ubuntu Daily Builds to Windows [OMG! Ubuntu!]

It's now much easier to try Ubuntu daily builds on Windows 10 and 11 using the Ubuntu Preview on WSL app recently added to the Microsoft Store.

This post, Ubuntu Preview on WSL Brings Ubuntu Daily Builds to Windows is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

How to Disable Animations in Ubuntu 22.04 LTS [OMG! Ubuntu!]

It's easy to disable animations in Ubuntu 22.04. You don't need extra apps or commands; a setting to turn off UI effects is now present in the Settings app.

This post, How to Disable Animations in Ubuntu 22.04 LTS is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

How to Use the VI Editor in Linux [Linux Journal - The Original Magazine of the Linux Community]

If you're searching for info related to the VI editor, this article is for you. So, what is the VI editor? VI is a screen-oriented text editor and the most popular one in the Linux world. The reasons for its popularity are 1) its availability for almost all Linux distros, 2) the fact that it works the same across multiple platforms, and 3) its user-friendly features. Currently, VI Improved (VIM) is the most widely used advanced counterpart of VI.

To work on the VI text editor, you have to know how to use the VI editor in Linux. Let’s find it out from this article.

Modes of VI Text Editor

The VI text editor works in two modes: 1) Command mode and 2) Insert mode. In command mode, the editor treats what you type as commands that act on the file. The VI editor usually starts in command mode, so you should be in command mode when issuing a command.

In Insert mode, on the other hand, the file is edited: the text you type is inserted into the file. So you need to be in insert mode to enter text. Just type 'i' to switch to insert mode. Use the Esc key to switch from insert mode back to command mode. If you don't know your current mode, press the Esc key twice; this takes you to command mode.

Launch VI Text Editor 

First, you need to launch the VI editor to begin working on it. To launch the editor, open your Linux terminal and then type:

vi <existing file name> or vi <new file name>

And if you mention an existing file, VI would open it to edit. Alternatively, you’re free to create a completely new file.

VI Editing Commands

You need to be in the command mode to run editing commands in the VI editor. VI is case-sensitive. Hence, make sure you use the commands in the correct letter case. Also, make sure you type the right command to avoid undesired changes. Below are some of the essential commands to use in VI.

i – Inserts at cursor (gets into the insert mode)

a – Writes after the cursor (gets into the insert mode)

A – Writes at the ending of a line (gets into the insert mode)

o – Opens a new line (gets into the insert mode)

ESC – Terminates the insert mode

u – Undo the last change

U – Undo all changes of the entire line

D – Deletes the content of a line after the cursor

R – Overwrites characters from the cursor onwards

r – Replaces a character

s – Substitutes one character under the cursor and continue to insert

S – Substitutes a full line and start inserting at the beginning of a line

11-05-2022

18:53

Faster adoption of edge computing architectures [Computable]

Red Hat introduces new capabilities for edge computing within its open hybrid cloud portfolio. This provider of open source solutions says it can accelerate the adoption of edge computing architectures. This is made possible by reducing complexity,...

Cegeka forms separate cybersecurity branch [Computable]

IT services company Cegeka is bundling all of its activities around monitoring, detecting, and responding to cybersecurity incidents into the new division Cyber Security Operations & Response Center, abbreviated to C-SOR²C. The company announced this at the Cybersec..., which started today.

SAS cloud solutions on the rise [Computable]

Analytics specialist SAS generated nineteen percent more revenue from cloud solutions last year. The American company is expanding its industry-specific portfolio with software for life sciences, the energy sector, and marketing technology.

Exact Globe gets a complete makeover [Computable]

Exact is releasing a new version of Exact Globe, the program with which 13,000 companies manage their business processes. In Exact Globe+, the software under the hood has been completely renewed and equipped with the latest technology.

Salesforce simplifies project administration at Radboudumc [Computable]

Salesforce and Utrecht-based Growtivity are going to support the Radboud University Medical Center (Radboudumc) in Nijmegen with the administration and process support of scientific research. The aim is to simplify and speed up these processes.

Digital transition of municipalities and provinces under pressure [Computable]

Investments in the digital and green transition are being squeezed by a dispute over how the Netherlands wants to spend the funds reserved under the European Corona Recovery Fund. The VNG (municipalities) and the IPO (provinces)...

Herke ICT is now called TreeICT [Computable]

Construction-industry IT company Herke ICT will henceforth go by the name TreeICT. The Alkmaar-based company, founded in 1998 by Herke Dekker, was acquired early last year by investor Nedvest, which placed it with...

KDE Connect is Now Available for iPhone & iPad [OMG! Ubuntu!]

A KDE Connect app is available on the Apple App Store. It lets iPhone & iPad users benefit from integration between their device(s) and the Linux desktop.

This post, KDE Connect is Now Available for iPhone & iPad is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Deb-Get is ‘Apt-Get’ for 3rd-Party Ubuntu Software [OMG! Ubuntu!]

All of your favourite extra-repo Ubuntu apps are now a single command away. Deb-Get is a tool that installs deb files from websites from the command line.

This post, Deb-Get is ‘Apt-Get’ for 3rd-Party Ubuntu Software is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Firefox 100 is Now Available to Download 🥳 [OMG! Ubuntu!]

Mozilla Firefox 100 is available to download. The new release includes new site theme options, subtitles in picture-in-picture mode, and Linux bug fixes.

This post, Firefox 100 is Now Available to Download 🥳 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Open Source Video Editor Kdenlive Gains 10-Bit Color Support [OMG! Ubuntu!]

A new version of Kdenlive, a Qt-based open source video editor, is available to download. We recap Kdenlive 22.04's new features and UI tweaks.

This post, Open Source Video Editor Kdenlive Gains 10-Bit Color Support is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Rhythmbox 3.4.5 Improves Its Support for Podcasts [OMG! Ubuntu!]

Rhythmbox 3.4.5 is available to download. It includes big improvements to podcast downloading, playback, and management plus a raft of smaller tweaks.

This post, Rhythmbox 3.4.5 Improves Its Support for Podcasts is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

LibreOffice 7.3.3 Community available for download [Press Releases Archives - The Document Foundation Blog]

LibreOffice is now available for download also on SourceForge

Berlin, May 4, 2022 – LibreOffice 7.3.3 Community, the third minor release of the LibreOffice 7.3 family, targeted at technology enthusiasts and power users, is available for download from https://www.libreoffice.org/download/. In addition to the LibreOffice website, starting from tomorrow it will be possible to download LibreOffice from SourceForge: https://sourceforge.net/projects/libreoffice/files/libreoffice/stable/.

Logan Abbott, SourceForge’s President and COO, says: “We’re happy to add to our open source download library an amazing open source office suite such as LibreOffice, which is without a doubt one of the best office suites ever and one which I personally use often. I highly recommend it to anyone that needs a powerful FOSS office suite.”

The LibreOffice 7.3 family offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files.

Microsoft files are still based on the proprietary format deprecated by ISO in 2008, which is artificially complex, and not on the ISO-approved standard. This lack of respect for the ISO standard format may create issues for LibreOffice, and is a huge obstacle to transparent interoperability.

LibreOffice for enterprise deployments

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), mobile platforms (Android and iOS) and the cloud. They may have a different name, according to each company brand strategy, but they share the same LibreOffice unique advantages, robustness and flexibility.

Availability of LibreOffice 7.3.3 Community

LibreOffice 7.3.3 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity and who therefore prefer a release that has undergone more testing and bug fixing over the new features, The Document Foundation provides LibreOffice 7.2.6 and soon LibreOffice 7.2.7.

LibreOffice 7.3.3 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.3.3/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.3.3/RC2 (changed in RC2). Over 80 bugs and regressions have been solved.

LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while those for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help the project to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 7.3.3 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Primer to Container Security [Linux Journal - The Original Magazine of the Linux Community]

Primer to Container Security

Containers are considered a standard way of deploying microservices to the cloud. Containers are better than virtual machines in almost all ways except security, which may be the main barrier to their widespread adoption.

This article will provide a better understanding of container security and available techniques to secure them.

A Linux container can be defined as a process or a set of processes running in the userspace that is/are isolated from the rest of the system by different kernel tools.

Containers are great alternatives to virtual machines (VMs). Even though containers and virtual machines provide similar isolation benefits, they differ in that containers virtualize the operating system rather than the hardware. This makes them lightweight, faster to start, and less memory-hungry.

As multiple containers share the same kernel, the solution is less secure than VMs, each of which has its own copy of the OS, libraries, dedicated resources, and applications. That makes VMs very secure, but their large storage footprint and reduced performance limit the total number of VMs that can run simultaneously on a server. Furthermore, VMs take a long time to boot.

The introduction of microservice architecture has changed the way of developing software. Microservices allow the development of software in small self-contained independent services. This makes the application easier to scale and provides agility.

If a part of the software needs to be rewritten, it can easily be done by changing only that part of the code without interrupting any other service, which isn't possible with a monolithic application.

Protection requirement use cases and solutions

1) Linux Kernel Features

a. Namespaces

Namespaces ensure that the resources of processes running in one container are isolated from those of others. They partition kernel resources among different processes: one set of processes in a separate namespace sees one set of resources, while another set of processes sees another. Processes in different namespaces see different process IDs, hostnames, user IDs, file names, network interfaces, and interprocess communication resources. Each file system (mount) namespace, for example, has its own private mount table and root directory.
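
You can get a feel for this by creating namespaces by hand with the unshare utility from util-linux (a quick sketch, assuming unshare is available, as it is on most distributions). The first command starts a shell in new PID and mount namespaces, so ps only sees the processes inside; the second changes the hostname in a new UTS namespace without touching the host.

sudo unshare --fork --pid --mount-proc bash -c 'ps -ef'        # only the new bash and ps are visible
sudo unshare --uts bash -c 'hostname demo-container; hostname' # the change is confined to the namespace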

01-05-2022

29-04-2022

13:47

Scrolling Up and Down in the Linux Terminal [Linux Journal - The Original Magazine of the Linux Community]

Scrolling Up and Down in the Linux Terminal

Are you looking for a way to scroll through your Linux terminal? Brace yourself. This article is written for you. Today you’ll learn how to scroll up and down in the Linux terminal. So, let’s begin.

Why You Need to Scroll in Linux Terminal

But before going ahead and learning about up and down scrolling in the terminal, let’s find out why scrolling in the Linux terminal is important. When a lot of output has been printed on your terminal screen, it helps to be able to move through it. You can clear the terminal at any time, which may make your work easier and quicker to complete. But what if you’re troubleshooting an issue and need the output of a previously entered command? Then scrolling up or down comes to the rescue.

Various shortcuts and commands allow you to perform scrolling in the Linux terminal whenever you want. So, for easy navigation in your terminal using the keyboard, read on.

How to Scroll Up and Down in Linux Terminal

In the Linux terminal, you can scroll up by page using the Shift + PageUp shortcut. And to scroll down in the terminal, use Shift + PageDown. To go up or down in the terminal by line, use Ctrl + Shift + Up or Ctrl + Shift + Down respectively.

Key Combinations Used in Scrolling

Following are some key combinations that are useful in scrolling through the Linux terminal. 

Ctrl+End: This allows you to scroll down to your cursor.

Ctrl+Page Up: This key combination lets you scroll up by one page.

Ctrl+Page Dn: This lets you scroll down by one page.

Ctrl+Line Up: To scroll up by one line, use this key combination.

Scrolling Up and Down with More Command

The more command allows you to view text files from the command prompt. For bigger files (for example, log files), it shows one screen at a time. The more command can also be used to scroll up and down within the output. To advance the display one line at a time, press the Enter key. To advance a screenful at a time, use the Spacebar. To scroll backward, press ‘b’.
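
For example (a small sketch; substitute any long text file you have on hand), you can page through a log file and use those keys to move around, pressing ‘q’ to quit:

more /var/log/syslog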

How to Disable Scrolling in the Terminal

To disable the scrollbar, follow the steps given in this section. First, on the window, press the Menu button residing in the top-right corner. Then select Preferences. From the Profiles section in the sidebar, select the profile you’re currently using. Then select the Scrolling option. Finally, uncheck the Show scrollbar to disable the scrolling feature in the terminal. Your preference will be saved immediately.

28-04-2022

27-04-2022

12:10

Self-Hosted Static Homepages: Dashy Vs. Homer [Linux Journal - The Original Magazine of the Linux Community]

Self-Hosted Static Homepages: Dashy Vs. Homer

Authors: Brandon Hopkins, Suparna Ganguly

Self-hosted homepages are a great way to manage your home lab or cloud services. If you’re anything like me, chances are you have a variety of Docker containers, media servers, and NAS portals all over the place. Using simple bookmarks to keep track of everything often isn’t enough. With a self-hosted homepage, you can view everything you need from anywhere. And you can add integrations and other features to help you better manage everything you need to.

Dashy and Homer are two separate static homepage applications. These are used in home labs and on the cloud to help people organize and manage their services, docker containers, and web bookmarks. This article will overview exactly what these self-hosted homepages have to offer.

Dashy

Dashy is a 100% free and open-source, self-hosted, highly customizable homepage app for your server that has a strong focus on privacy. It offers an easy-to-use visual editor, widgets, status checking, themes, and lots more. Below are some of the features Dashy offers.

Live Demo: https://demo.dashy.to/

Customize

You can customize Dashy to fit your use case. From the UI you can choose from different layouts, show/hide components, change item sizes, switch themes, and a lot more. You can customize each area of your dashboard. There are config options available for a custom HTML header, footer, title, navbar links, etc. If you don’t need something, just hide it!

Dashy offers multiple color themes, a UI color editor, and support for custom CSS. Since all of the properties use CSS variables, they are quite easy to override. In addition to themes, you get a host of icon options, such as Font Awesome, home lab icons, Material Design Icons, normal images, emojis, auto-fetched favicons, etc.
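
If you want to try Dashy before settling on it, it is commonly run as a Docker container. A minimal sketch, assuming Docker is installed and that the upstream image is still published as lissy93/dashy (check the project's documentation for the current image name and internal port):

docker run -d --name dashy --restart=unless-stopped -p 8080:80 lissy93/dashy

Then point your browser at port 8080 on that host and start customizing from the UI.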

Integrations

26-04-2022

08:54

GIMP in a Pinch: Life after Desktop [Linux Journal - The Original Magazine of the Linux Community]

GIMP in a Pinch: Life after Desktop

So my Dell XPS 13 DE laptop running Ubuntu died on me today. Let’s just say I probably should not have attempted to be efficient and take a bath and work at the same time!

Unfortunately, as life always seems to go, you need something precisely when you don’t have it, and that was the case today. I have some pictures that I need to edit for a website, and I only know and use GIMP. I took a look at my PC inventory at home, and I had two options:

  1. Macbook Air: My roommate’s computer
  2. HP Chromebook 11: A relic of a two-week phase in which I attempted to streamline and simplify my life

My roommate was using his computer, so that really left me with only one option, the Chromebook. I also did not have a desire to learn another OS today, as I have done enough distro hopping in the last few months. I charged and booted up the Chromebook and started to figure out how I could get GIMP onto it. Interestingly enough, there are not many clear-cut options for running GIMP on an Android device. There was an option to run a Linux developer environment on the Chromebook, but it required 10GB of space which I didn’t have. Therefore, option two was to find an app on the Google Play Store.

Typing GIMP brought me to an app called XGimp Image Editor from DMobileAndroid, and I installed and loaded it with an image to only find this:

[screenshot: XGimp Image Editor]

This definitely is nothing like GIMP and appeared to be very limited in functionality anyway. I could see why it had garnered a 1.4 star rating as it definitely is not what someone would expect when they are looking for something similar to GIMP.

So I took a look at the other options, and there was another app called GIMP from Userland Technologies. It does cost $1.99, but it was a one-time charge and seemed to be the only other option on the Play Store. Reviewing the screenshots and the description of the application seemed to suggest that this would be the actual GIMP app that I was using on my desktop so I went ahead and downloaded it. Installation was relatively quick, and I started running it and to my surprise, here is what I saw:

[screenshot: GIMP running on the Chromebook]

It appears that the application is basically a Linux desktop build that automatically launches the desktop version of GIMP. Therefore, it really is GIMP. Loading up an image was also relatively easy, as it seamlessly connected to the folders on my Chromebook.

25-04-2022

23-04-2022

21-04-2022

10:15

Geek Guide: Purpose-Built Linux for Embedded Solutions [Linux Journal - The Original Magazine of the Linux Community]

Geek Guide: Purpose-Built Linux for Embedded Solutions

The explosive growth of the Internet of Things (IoT) is just one of several trends that are fueling the demand for intelligent devices at the edge. Increasingly, embedded devices use Linux to leverage libraries and code as well as Linux OS expertise to deliver functionality faster, simplify ongoing maintenance, and provide the most flexibility and performance for embedded device developers.

This e-book looks at the various approaches to providing both Linux and a build environment for embedded devices and offers best practices on how organizations can accelerate development while reducing overall project cost throughout the entire device lifecycle.

Download PDF

17-04-2022

08-04-2022

06-04-2022

09:26

How to Install and Uninstall KernelCare [Linux Journal - The Original Magazine of the Linux Community]

How to Install and Uninstall KernelCare

In my previous article, I described what KernelCare is. In this article, I’m going to tell you how to install, uninstall, clear the KernelCare cache, and other important information regarding KernelCare. In case you’re yet to know about the product, here’s a short recap. KernelCare provides automated security updates to the Linux kernel. It offers patches and error fixes for various Linux kernels.

So, if you are looking for anything similar, you have landed upon the right page. Let’s begin without further ado.

Prerequisites to Install KernelCare

Before installing KernelCare on your Linux system, ensure that you are running one of the operating systems listed below.

  • 64-bit RHEL/CentOS 5.x, 6.x, 7.x

  • CloudLinux 5.x, 6.x

  • Virtuozzo/PCS/OpenVZ 2.6.32

  • Debian 6.x, 7.x

  • Ubuntu 14.04

Note: In case you have KernelCare installed on your machine, it might be useful to know the current KernelCare version before installing KernelCare next time. To know the current version run the below-given command as root:

/usr/bin/kcarectl --uname

Checking Kernel’s Compatibility with KernelCare

To check if your current kernel is compatible with KernelCare, you need to use the following code.

curl -s -L https://kernelcare.com/checker | python

Installing KernelCare

Run the following command to install KernelCare.

curl -s -L https://kernelcare.com/installer | bash

If you use an IP-based license, you don’t need to do anything more. However, if you use a key-based license, run the following command.

/usr/bin/kcarectl --register KEY

KEY is a registration key code string. It’s given to you when you sign up to purchase or to go through a trial of KernelCare. Let’s see an example.

[root@unixcop:~]# /usr/bin/kcarectl --register XXXXXXXXXXX

Server Registered

The above example shows a registration key code string.

If you experience a “Key limit reached” error message, you need to first unregister the server after the trial ends. To do so, run:

kcarectl --unregister

Checking If the Patches Applied Successfully

For checking if the patches have been applied successfully or not, use the command as given below.

/usr/bin/kcarectl --info

Now the software will check for new patches automatically every 4 hours.

If you want to run updates manually, run:
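
(The exact flag isn't shown in this excerpt; going by the kcarectl interface used above, it is presumably the following, so verify it against the KernelCare documentation:)

/usr/bin/kcarectl --update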

31-03-2022

17:13

LibreOffice 7.3.2 Community available for download [Press Releases Archives - The Document Foundation Blog]

Berlin, March 31, 2022 – LibreOffice 7.3.2 Community, the second minor release of the LibreOffice 7.3 family, targeted at technology enthusiasts and power users, is available for download from https://www.libreoffice.org/download/.

The LibreOffice 7.3 family offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files.

Microsoft files are still based on the proprietary format deprecated by ISO in 2008, which is artificially complex, and not on the ISO approved standard. This lack of respect for the ISO standard format may create issues for LibreOffice, and is a huge obstacle to transparent interoperability.

LibreOffice for enterprise deployments

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), mobile platforms (Android and iOS) and the cloud. They may have a different name, according to each company's brand strategy, but they share the same unique LibreOffice advantages, robustness and flexibility.

Availability of LibreOffice 7.3.2 Community

LibreOffice 7.3.2 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity and who therefore prefer a release that has undergone more testing and bug fixing over the new features, The Document Foundation provides LibreOffice 7.2.6.

LibreOffice 7.3.2 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.3.2/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.3.2/RC2 (changed in RC2). Over 80 bugs and regressions have been solved.

LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while those for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help the project to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 7.3.2 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

What’s KernelCare? [Linux Journal - The Original Magazine of the Linux Community]

What’s KernelCare?

This article explains all that you need to know about KernelCare. But before studying KernelCare, let’s do a quick recap of the Linux kernel. It’ll help you understand KernelCare better. The Linux kernel is the core part of the Linux OS. It resides in memory and tells the CPU what to do.

Now let’s begin with today’s topic which is KernelCare. And if you’re a system administrator this article is going to present valuable information for you.

What is KernelCare?

So, what’s KernelCare? KernelCare is a patching service that offers live security updates for Linux kernels, shared libraries, and embedded devices. It patches security vulnerabilities inside the Linux kernel without creating service interruptions or any downtime. Once you install KernelCare on the server, security updates are automatically applied to it every 4 hours. This eliminates the need to reboot your server after updates.

It is a commercial product and is licensed under GNU GPL version 2. CloudLinux, Inc. developed this product. The first beta version of KernelCare was released in March 2014 and its commercial launch was in May 2014. Since then they have added various useful integrations for automation tools, vulnerability scanners, and others.

Operating systems supported by KernelCare include CentOS/RHEL 5, 6, 7; Cloud Linux 5, 6; OpenVZ, PCS, Virtuozzo, Debian 6, 7; and Ubuntu 14.04.

Is KernelCare Important?

Are you wondering if KernelCare is important for you or not? Find out here. By installing the latest kernel security patches, you are able to minimize potential risks. When you try to update the Linux kernel manually, it may take hours. Apart from the server downtime, it can be a stressful job for the system admins and also for the clients.

Once the kernel updates are applied, the server needs a reboot. This is usually done during off-peak work hours, and it causes some additional stress. However, skipping server reboots can cause a whole lot of security issues. Sometimes, even after rebooting, the server experiences issues and doesn’t come back up easily. Fixing such issues is a headache for system admins. Often the system admin needs to roll back all the applied updates to get the server up quickly.

With KernelCare, you can avoid such issues.

How Does KernelCare Work?

KernelCare eliminates non-compliance and service interruptions caused by system reboots. The KernelCare agent resides on your server and periodically checks for new updates. If it finds any, the agent downloads them and applies them to the running kernel. A KernelCare patch can be defined as a piece of code that’s used to substitute buggy code in the kernel.
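
If the agent is installed, a quick way to see it at work (a small sketch; --uname is the same kcarectl flag mentioned in the companion installation article) is to compare the kernel version the server booted with against the version the agent reports:

uname -r                    # kernel version the server booted with
/usr/bin/kcarectl --uname   # version reported by the KernelCare agent, patches included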

Getting Started with Docker Semi-Self-Hosting on Linode [Linux Journal - The Original Magazine of the Linux Community]

Getting Started with Docker Semi-Self-Hosting on Linode

With the evolution of technology, we find ourselves needing to be ever more vigilant about our online security every day. Our browsing and shopping behaviors are also continuously tracked online via tracking cookies dropped on our browsers, which we allow by clicking the “I Accept” button next to deliberately long agreements before we can get the full benefit of a site.

Additionally, hackers are always looking for a target and it's common for even big companies to have their servers compromised in any number of ways and have sensitive data leaked, often to the highest bidder.

These are just some of the reasons that I started looking into self-hosting as much of my own data as I could.

Because not everyone has the option to self-host on their own private hardware, whether for lack of hardware or because their ISP makes it difficult or impossible to do so, I want to show you what I believe to be the next best step, and that's a semi-self-hosted solution on Linode.

Let's jump right in!

Setting up a Linode

First things first, you’ll need a Docker server set up. Linode has made that process very simple: you can set one up for just a few bucks a month, add a private IP address for free, and add backups for just a couple of bucks more per month.

Log into your Linode account and click on "Create Linode".

Don't have a Linode account? Get $100 in credit by clicking here.

On the "Create" page, click on the "Marketplace" tab and scroll down to the "Docker" option. Click it.

With Docker selected, scroll down and close the "Advanced Options" as we won't be using them.

Below that, we'll select the most recent version of Debian (version 10 at the time of writing).

In order to get the lowest latency for your setup, select the Region nearest you.

When we get to the "Linode Plan" area, find an option that fits your budget. You can always start with a small plan and upgrade later as your needs grow.

Next, enter a "Linode Label" as an identifier for you. You can enter tags if you want.

Enter a Root Password and import an SSH key if you have one. If you don't, that's fine; you don't need to use an SSH key. If you'd like to generate one and use it, you can find more information about how to do so in "Creating an SSH Key Pair and Configuring Public Key Authentication on a Server".
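
If you do want a key, generating one only takes a moment (a quick sketch using OpenSSH's ssh-keygen; the file name and comment below are just examples):

ssh-keygen -t ed25519 -f ~/.ssh/linode_docker -C "linode-docker"
cat ~/.ssh/linode_docker.pub

Paste the contents of the .pub file into the SSH key field on the Linode create page and keep the private key on your machine.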

30-03-2022

28-03-2022

24-03-2022

20:05

5 Lesser-Known Open Source Web Browsers for Linux in 2022 [Linux Journal - The Original Magazine of the Linux Community]

5 Lesser-Known Open Source Web Browsers for Linux in 2022

If you’re in search of open-source web browsers that are lesser-known to you, this article is written for you. This article takes you through 5 amazing open-source web browsers that are readily available for your Linux system. Let’s find out the options to choose from in 2022.

Konqueror

Konqueror is a web browser developed by KDE and one of the lesser-known open-source browsers built on top of KHTML. Konqueror was built for file previewing and file management of every kind, and it uses the KHTML or KDEWebKit rendering engines. File management on ftp and sftp servers is done using Dolphin’s features, including service menus, version control, and the basic UI. It has a full-featured FTP client, so you can split views to show remote and local folders and previews in the same window.

For previewing files, the Konqueror browser has built-in embedded applications, such as Gwenview for pictures, Okular and Calligra for documents, KTextEditor for text files, etc. You can use its various plugins, such as service menus, KPart for ad blocking, KIO to access files, and others.

The international KDE community maintains the Konqueror browser.

GNOME Web

GNOME Web comes next in this list of free and open-source web browsers made for Linux. It’s a clean browser that features first-class GNOME and Pantheon desktop integrations. It also includes a built-in adblocker and Intelligent Tracking Prevention. It primarily follows GNOME’s design philosophy. So, there’s no wasted space or useless widgets.

Despite being a GNOME component, GNOME Web does not depend on other GNOME components. GNOME Web is built on top of the WebKit rendering engine. You can use Flatpak to install Epiphany, as Flatpak is a reliable application distribution mechanism for Linux. elementary OS and Bodhi Linux use GNOME Web as their default web browser. Did you know the GNOME Web browser’s codename is Epiphany? Why Epiphany? Well, it means a sudden perception or manifestation of the meaning of something. Let’s move on to our next open-source browser.
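
For example (a short sketch, assuming Flatpak is set up and that the Flathub application ID is org.gnome.Epiphany; adjust if the listing differs):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gnome.Epiphany
flatpak run org.gnome.Epiphany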

12-03-2022

11:16

Announcement of LibreOffice 7.2.6 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, March 10, 2022 – LibreOffice 7.2.6 Community, the sixth minor release of the LibreOffice 7.2 family, targeted at desktop productivity, is available from the download page.

End user support is provided by volunteers via email and online resources: community support. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: LibreOffice Business.

LibreOffice 7.2.6’s changelog pages are available on TDF’s wiki: RC1 and RC2.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

LibreOffice Technology-based products for Android and iOS are listed here, while those for App Stores and ChromeOS are listed on this page.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools on our donate page.

LibreOffice 7.2.6 is built with document conversion libraries from the Document Liberation Project.

03-03-2022

21:51

LibreOffice 7.3.1 Community available for download [Press Releases Archives - The Document Foundation Blog]

Berlin, March 3, 2022 – LibreOffice 7.3.1 Community, the first minor release of the LibreOffice 7.3 family, targeted at technology enthusiasts and power users, is available for download from https://www.libreoffice.org/download/. This version provides a solution to several LibreOffice 7.3 bugs, including the Auto Calculate regression on Calc, the crashes running Calc when lacking AVX instructions and the crashes related to the Skia graphic engine on macOS.

The LibreOffice 7.3 family offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files.

Microsoft files are still based on the proprietary format deprecated by ISO in 2008, which is artificially complex, and not on the ISO approved standard. This lack of respect for the ISO standard format may create issues for LibreOffice, and is a huge obstacle to transparent interoperability.

LibreOffice for enterprise deployments

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), mobile platforms (Android and iOS) and the cloud. They may have a different name, according to each company's brand strategy, but they share the same unique LibreOffice advantages, robustness and flexibility.

Availability of LibreOffice 7.3.1 Community

LibreOffice 7.3.1 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity and who therefore prefer a release that has undergone more testing and bug fixing over the new features, The Document Foundation provides LibreOffice 7.2.5.

LibreOffice 7.3.1 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.3.1/RC1 (changed in RC1), https://wiki.documentfoundation.org/Releases/7.3.1/RC2 (changed in RC2) and https://wiki.documentfoundation.org/Releases/7.3.1/RC3 (changed in RC3).

LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while those for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help the project to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 7.3.1 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

02-02-2022

17:17

LibreOffice 7.3 Community is better than ever at interoperability [Press Releases Archives - The Document Foundation Blog]

In addition to the majority of code commits being focused on interoperability with Microsoft’s proprietary file formats, there is a wealth of new features targeted at users migrating from Office, to simplify the transition

Berlin, February 2, 2022 – LibreOffice 7.3 Community, the new major release of the volunteer-supported free office suite for desktop productivity, is available from https://www.libreoffice.org/download. Based on the LibreOffice Technology platform for personal productivity on desktop, mobile and cloud, it provides a large number of improvements targeted at users migrating from Microsoft Office to LibreOffice, or exchanging documents between the two office suites.

There are three different kinds of interoperability improvements:

  • Development of new features, such as the new handling of change tracking in tables and when text is moved, which have a positive impact on interoperability with Microsoft Office documents.
  • Performance improvements when opening large DOCX and XLSX/XLSM files, improved rendering speed of some complex documents, and new rendering speed improvements when using the Skia back-end introduced with LibreOffice 7.1.
  • Improvements to import/export filters: DOC (greatly improved list/numbering import); DOCX (greatly improved list/numbering import; hyperlinks attached to shapes are now imported/exported; fix permission for editing; track change of paragraph style); XLSX (decreased row height for Office XLSX files; cell indent doesn’t increase on each save; fix permission for editing; better support of XLSX charts); and PPTX (fixed interactions and hyperlinks on images; fix the incorrect import/export of PPTX slide footers; fix hyperlinks on images and shapes; transparent shadow for tables).

In addition, LibreOffice’s Help has also been improved to support all users, with particular attention to those switching from Microsoft Office: search results – which now use FlexSearch instead of Fuzzysort for indexing – are focused on the user’s current module; Help pages for Calc Functions have been reviewed for accuracy and completeness and linked to Calc Function wiki pages; and Help pages for the ScriptForge scripting library have been updated.

ScriptForge libraries, which make it easier to develop macros, have also been extended with various features: the addition of a new Chart service, to define charts stored in Calc sheets; a new PopupMenu service, to describe the menu to be displayed after a mouse event; an extensive option for Printer Management, with a list of fonts and printers; and a feature to export documents to PDF with full management of PDF options. The whole set of services is available with identical syntax and behavior for Python and Basic.

LibreOffice offers the highest level of compatibility in the office suite market segment, starting with native support for the OpenDocument Format (ODF) – beating proprietary formats in the areas of security and robustness – to superior support for DOCX, XLSX and PPTX files. In addition, LibreOffice provides filters for a large number of legacy document formats, to return ownership and control to users.

Microsoft files are still based on the proprietary format deprecated by ISO in 2008, and not on the ISO approved standard, so they hide a large amount of artificial complexity. This causes handling issues with LibreOffice, which defaults to a true open standard format (the OpenDocument Format).

LibreOffice 7.3 is available natively for Apple Silicon, a series of processors designed by Apple and based on the ARM architecture. The option has been added to the default ones available on the download page.

A video summarizing the top new features in LibreOffice 7.3 Community is available on YouTube: https://www.youtube.com/watch?v=Raw0LIxyoRU and PeerTube: https://peertube.opencloud.lu/w/iTavJYSS9YYvnW43anFLeC.

A description of all new features is available in the Release Notes [1].

Contributors to LibreOffice 7.3 Community

LibreOffice 7.3 Community’s new features have been developed by 147 contributors: 69% of code commits are from the 49 developers employed by three companies sitting in TDF’s Advisory Board – Collabora, Red Hat and allotropia – or other organizations (including The Document Foundation), and 31% are from 98 individual volunteers.

In addition, 641 volunteers have provided localizations in 155 languages. LibreOffice 7.3 Community is released in 120 different language versions, more than any other free or proprietary software, and as such can be used in the native language (L1) by over 5.4 billion people worldwide. In addition, over 2.3 billion people speak one of those 120 languages as their second language (L2).

LibreOffice for Enterprises

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners – for desktop, mobile and cloud – with a large number of dedicated value-added features. These include long-term support options, professional assistance, personalized developments and other benefits such as SLA (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.

Despite this recommendation, an increasing number of enterprises are using the version supported by volunteers, instead of the version optimized for their needs and supported by the different ecosystem companies.

Over time, this represents a problem for the sustainability of the LibreOffice project, because it slows down the evolution of the project. In fact, every line of code developed by ecosystem companies for their enterprise customers is shared with the community on the master code repository, and improves the LibreOffice Technology platform.

Products based on LibreOffice Technology are available for major desktop operating systems (Windows, macOS, Linux and Chrome OS), for mobile platforms (Android and iOS), and for the cloud. Slowing down the development of the platform is hurting users at every level, and the LibreOffice project may fall short of its expectations and possibilities.

Migrations to LibreOffice

The Document Foundation has developed a Migration Protocol to support enterprises moving from proprietary office suites to LibreOffice, which is based on the deployment of an LTS version from the LibreOffice Enterprise family, plus migration consultancy and training sourced from certified professionals who offer value-added solutions in line with proprietary offerings. Reference: https://www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice – thanks to its mature codebase, rich feature set, strong support for open standards, excellent compatibility and LTS options from certified partners – is the ideal solution for businesses that want to regain control of their data and free themselves from vendor lock-in.

Availability of LibreOffice 7.3 Community

LibreOffice 7.3 Community is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.12.

LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while those for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

For users whose main objective is personal productivity, and therefore prefer a release that has undergone more testing and bug fixing over the new features, The Document Foundation maintains the LibreOffice 7.2 family, which includes some months of back-ported fixes. The current version is LibreOffice 7.2.5.

The Document Foundation does not provide technical support for users, although they can get it from volunteers on user mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org

LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at https://www.libreoffice.org/donate.

LibreOffice 7.3 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org

[1] Release Notes: https://wiki.documentfoundation.org/ReleaseNotes/7.3

Press Kit

Download link: https://nextcloud.documentfoundation.org/s/MnZEgpr86TzwBJi

06-01-2022

17:54

LibreOffice 7.2.5 is now available [Press Releases Archives - The Document Foundation Blog]

Berlin, January 6, 2022 – The Document Foundation announces LibreOffice 7.2.5 Community, the fifth minor release of the LibreOffice 7.2 family, which is available on the download page.

This version includes 90 bug fixes and improvements to document compatibility. The changelogs provide details of the fixes: changes in RC1 and changes in RC2.

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: LibreOffice in Business.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite, not only for the desktop but also for mobile and the cloud.

LibreOffice Technology-based products for Android and iOS are listed on this page, while products for App Stores and ChromeOS are listed here.

Get help, and support us

Individual users are assisted by a global community of volunteers, via our community help pages. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users are invited to join the community at Ask LibreOffice, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at What Can I Do For LibreOffice.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card, bank transfer, cryptocurrencies and other methods on this page.

LibreOffice 7.2.5 is built with document conversion libraries from the Document Liberation Project.

Download LibreOffice 7.2.5

06-12-2021

15:31

LibreOffice 7.2.4 Community and LibreOffice 7.1.8 Community available ahead of schedule to provide an important security fix [Press Releases Archives - The Document Foundation Blog]

Berlin, December 6, 2021 – The Document Foundation announces LibreOffice 7.2.4 Community and LibreOffice 7.1.8 Community to provide a key security fix. Releases are immediately available from https://www.libreoffice.org/download/, and all LibreOffice users are recommended to update their installations. Both new versions include the fixed NSS 3.73.0 cryptographic library to solve CVE-2021-43527 (the NSS security fix is the only change compared to the previous versions).

LibreOffice 7.2.4 Community is also available for Apple Silicon from this link: https://download.documentfoundation.org/libreoffice/stable/7.2.4/mac/aarch64/.

LibreOffice Community is based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

25-11-2021

18:33

The Document Foundation announces LibreOffice 7.2.3 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, November 25, 2021 – The Document Foundation announces LibreOffice 7.2.3 Community, the third minor release of the LibreOffice 7.2 family targeted at technology enthusiasts and power users, which is available for download from https://www.libreoffice.org/download/. This version includes 112 bug fixes and improvements to document compatibility.

LibreOffice 7.2.3 Community is also available for Apple Silicon from this link: https://download.documentfoundation.org/libreoffice/stable/7.2.3/mac/aarch64/.

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

Availability of LibreOffice 7.2.3 Community

LibreOffice 7.2.3 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity and who therefore prefer a release that has undergone more testing and bug fixing over the new features, The Document Foundation provides LibreOffice 7.1.7.

LibreOffice 7.2.3 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.2.3/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.2.3/RC2 (changed in RC2).

LibreOffice Technology-based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while those for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 7.2.3 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

11-11-2021

12:02

Two Super Fast App Launchers for Ubuntu 19.04 [Tech Drive-in]

During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, third-party app launchers were a big deal. Over time, the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.


As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer, on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old school, we'll only be discussing Synapse and Ulauncher here.

Synapse

I still remember the excitement when I first reviewed Synapse more than 8 years ago. Back then, Synapse was something very unique to Linux and Ubuntu, and it still is in many ways. Though Synapse is not the active project it used to be, the launcher still works great even on brand-new Ubuntu 19.04.

No need to meddle with PPAs and DEBs, Synapse is available in Ubuntu Software Center.

CLICK HERE to directly find and install Synapse from Ubuntu Software Center, or simply search 'Synapse' in USC. Launch the app afterwards. Once launched, you can trigger Synapse with Ctrl+Space keyboard shortcut.
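
Prefer the terminal? Synapse has long been packaged in Ubuntu's universe repository, so the following should do it (a sketch, assuming the package is still simply named synapse on your release):

sudo apt install synapse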

Ulauncher

The new kid on the block, apparently. But new doesn't mean it is lacking in any way. What makes Ulauncher quite unique are its extensions. And there are plenty to choose from.

From an extension that lets you control your Spotify desktop app, to generic unit converters or simple timers, Ulauncher extensions have got you covered.

Let's install the app first. Download the DEB file for Debian/Ubuntu users and double-click the downloaded file to install it. To complete the installation via Terminal instead, do this:

sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb

Change filename/location if they are different in your case. And if the command reports dependency errors, make a force install using the command below.

sudo apt-get install -f

Done. Post install, launch the app from your app-list and you're good to go. Once started, Ulauncher will sit in your system tray by default. And just like Synapse, Ctrl+Space will trigger Ulauncher.


Installing extensions in Ulauncher is pretty straightforward too.


Find the extensions you want from Ulauncher Extensions page. Trigger a Ulauncher instance with Ctrl+Space and go to Settings > Extensions > Add extension. Provide the URL from the extension page and let the app do the rest.

A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]

Snap apps are a godsend. ElectronPlayer is an Electron based app available on Snapstore that doubles up as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane etc.

And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full-screen applications.

For me, this is great because I can free up tabs on my Firefox window, which is almost never clutter-free.

Use the command below to install ElectronPlayer directly from Snapstore. Open Terminal (Ctrl+Alt+t) and copy:

sudo snap install electronplayer

Press ENTER and give password when asked.

After the process is complete, search for ElectronPlayer in your app list. Sign in to your favorite video streaming services and you are good to go. Let us know your feedback in the comments.

Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed, see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.

Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.

And the process itself is not as tedious or time-consuming as it is on Windows. Also unlike Windows, the upgrades are not forced upon you while you're in the middle of something.

[screenshot: Ubuntu 19.04 desktop with dash-to-dock]

If you're wondering how the dock in the above screenshot rests at the bottom of the Ubuntu desktop, it's the dash-to-dock GNOME Shell extension. That and more Ubuntu 19.04 tips and tricks here.

Upgrade to Ubuntu 19.04 from Ubuntu 18.10

Disclaimer: PLEASE backup your critical data before starting the upgrade process.

Let's start with the assumption that you're on Ubuntu 18.04 LTS.

After running the upgrade from Ubuntu 18.04 LTS to Ubuntu 18.10, the prompt will ask for a full system reboot. Please do that, and make sure everything is running smoothly afterwards. Now you have a clean new Ubuntu 18.10 up and running. Let's begin the Ubuntu 19.04 upgrade process.
  • Make sure your laptop is plugged in; this is going to take time. A stable internet connection is a must too. 
  • Run your Software Updater app, and install all the updates available. 
  • Post the update, you should be prompted with an "Ubuntu 19.04 is available" window. It will guide you through the required steps without much hassle. 
  • If not, fire up Software & Updates app and check for updates. 
  • If neither of these worked in your case, there's always the command-line option to force the upgrade. Open the Terminal app (keyboard shortcut: CTRL+ALT+T) and run the command below.
sudo do-release-upgrade -d
  • Type the password when prompted. Don't let the simplicity of the command fool you; this is just the start of a long and involved process. The do-release-upgrade command will check for available upgrades and then give you an estimate of the time and bandwidth required to complete the process. 
  • Read the instructions carefully and proceed. The process took about an hour or less for me; it entirely depends on your internet speed and system resources. Once it's done, the quick check below confirms you're on the new release.
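
After the reboot, a couple of standard commands are enough to verify the release and kernel you're now running:

lsb_release -a
uname -r
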
So, how did it go? Was the upgrade process smooth as it should be? And what do you think about new Ubuntu 19.04 "disco dingo"? Let us know in the comments.

15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]

Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock-solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary, though the availability of the latest Linux kernel, version 5.0, is significant.

Unity is long gone and Ubuntu 19.04 is indistinguishably GNOME 3.x now, which is not necessarily a bad thing. Yes, I know, there are many who still swear by the simplicity of Unity desktop. But I'm an outlier here, I liked both Unity and GNOME 3.x even in their very early avatars. When I wrote this review of GNOME Shell desktop almost 8 years ago, I knew it was destined for greatness. Ubuntu 19.04 "Disco Dingo" runs GNOME 3.32.0.


We'll discuss GNOME 3.x and Ubuntu 19.04 further in the official review. Let's get down to brass tacks: a step-by-step guide to the things I did after installing Ubuntu 19.04 "Disco Dingo".

1. Make sure your system is up-to-date

Do a full system update. Fire up your Software Updater and check for updates.

OR
via Terminal, this is my preferred way to update Ubuntu. Just one command.

sudo apt update && sudo apt dist-upgrade

Enter password when prompted and let the system do the rest.

2. Install GNOME Tweaks

GNOME Tweaks is non-negotiable.

GNOME Tweaks is an app that lets you tweak little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 19.04, Tweaks is a must. Honestly, I don't remember if it was installed by default. But here's how you install it anyway; the Apt-URL will prompt you if the app already exists.

Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

You do have an option to install most of the 'restricted extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

4. Display Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self explanatory.

[screenshot: GNOME Tweaks 'Top Bar' settings]

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu, and enable/disable what you need.

5. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there; all you need to do is switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. Now if you didn't find the 'click to minimize' feature useful, you can always revert Dock settings back to its original state, by copy-pasting the following command in Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action

6. Pin/Unpin Apps from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on the launcher, but I would prefer a shortcut to the Terminal app instead. Right-click your preferred app on the launcher and add it to, or remove it from, favorites as you please.

7. Enable GNOME Shell Extensions Support

Extensions are an integral part of GNOME desktop.

It's a real shame that one has to go through all this for such a basic yet important feature. When you visit the GNOME Extensions page from the default Firefox browser, you will notice a warning message on top describing the unavailability of Extensions support. The first part of the fix is to install the browser add-on that the page prompts you for.
Now for the second part: you need to install the host connector on Ubuntu.
sudo apt install chrome-gnome-shell
  • Done. Don't mind the "chrome" in 'chrome-gnome-shell'; it works with all major browsers, provided you have the correct browser add-on installed. 
  • You can now visit the GNOME Extensions page and install extensions with ease (if it doesn't work immediately, a system restart will clear things up). 
Extensions are such an integral part of the GNOME desktop experience, I can't understand why this is not a system default in Ubuntu 19.04. Hopefully future releases of Ubuntu will have this figured out.

8. My Favourite 5 GNOME Shell Extensions for Ubuntu 19.04


9. Remove Trash Icon from Desktop

Annoyed by the permanent presence of the Home and Trash icons on the desktop? You are not alone. Luckily, there's an extension for that!
Install it from the GNOME Extensions page and you're done. Now access its settings and enable/disable icons as you please.


Extension settings can be accessed directly from the extension's home page (notice the small wrench icon near the ON/OFF toggle), or you can use an Extensions add-on like in the screenshot above.
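
Alternatively, on Ubuntu 19.04 the desktop icons are drawn by the preinstalled 'Desktop Icons' shell extension, so the same toggles can usually be flipped with gsettings. A sketch, assuming the stock extension and its default schema:

gsettings set org.gnome.shell.extensions.desktop-icons show-trash false
gsettings set org.gnome.shell.extensions.desktop-icons show-home false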

10. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling has been a system default for some time now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings': Settings > Mouse and Touchpad.
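
If you prefer the Terminal, the same switch can be flipped with gsettings. A sketch, assuming the standard GNOME touchpad schema:

gsettings set org.gnome.desktop.peripherals.touchpad two-finger-scrolling-enabled false
gsettings set org.gnome.desktop.peripherals.touchpad edge-scrolling-enabled true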

Quick tip: You can go straight to a submenu by simply searching for it in GNOME's universal search bar.

ubuntu 19.04 disco

Take, for example, the screenshot above, where I triggered the GNOME menu by hitting the Super (Windows) key and simply searched for 'mouse' settings. The first result takes me directly to the 'Mouse and Touchpad' submenu in 'Settings' that we saw earlier. Easy, right? More examples will follow.

11. Nightlight Mode ON

When you're glued to your laptop/PC screen for long stretches every day, it is advisable to enable the automatic nightlight mode for the sake of your eyes. Be it my laptop or my phone, this has become an essential feature. The sight of an LED display without nightlight ON in low-light conditions immediately gives me a headache these days. Easily one of my favourite built-in features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 19.04

OR, as before: hit the Super key > search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples of that.

things to do after installing ubuntu 19.04
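
Or, if you'd rather script it, the nightlight toggle is also exposed via gsettings. A sketch, assuming the standard GNOME colour plugin schema:

gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true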

12. Privacy on Ubuntu 19.04

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 19.04 privacy

Ubuntu remembers your usage & history to recommend frequently used apps and such, and this is never shared over the network. But if you're not comfortable with it, you can always disable and delete your usage history on Ubuntu: Settings > Privacy > Usage & History.
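
The same toggles are also exposed via gsettings, if you'd rather do it from the Terminal. A sketch, assuming the standard GNOME privacy schema:

gsettings set org.gnome.desktop.privacy remember-recent-files false
gsettings set org.gnome.desktop.privacy remember-app-usage false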

13. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 19.04

Right now I'm using System76's Pop OS GTK theme and icon set. They look pretty neat, I think. Just a few commands to install them on your Ubuntu 19.04.

sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers

Execute the last command only if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon set, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions, so stay subscribed.
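
If you'd rather not open Tweaks, the GTK and icon themes can also be switched with gsettings. A sketch; the 'Pop' theme names below are an assumption based on what the PPA installs and may differ on your system:

gsettings set org.gnome.desktop.interface gtk-theme 'Pop'
gsettings set org.gnome.desktop.interface icon-theme 'Pop'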

14. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.


Settings > Privacy > Problem Reporting and switch it off. 
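
Alternatively, Apport (the service behind those popups) can be switched off from the Terminal by editing its config file, just like on earlier Ubuntu releases:

sudo gedit /etc/default/apport

Change the enabled=1 entry to enabled=0, save the file, and error reporting is completely disabled.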

15. Liberate vertical space on Firefox by disabling Title Bar

This is not an Ubuntu specific tweak.


Firefox > Settings > Customize. Notice the "Title Bar" at the bottom left? Untick to disable.

Follow us on Facebook, and Twitter.

Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]

A "Disco Dingo" themed wallpaper was already there. But the latest update bring a bunch of new wallpapers as system defaults on Ubuntu 19.04.

ubuntu 19.04 wallpaper

Pretty right? Here's the older one for comparison.

ubuntu 19.04 updates

The newer wallpaper is definitely cleaner and more professional looking, with better colors. I won't bother tinkering with wallpapers anymore; the new default on Ubuntu 19.04 is just perfect.

ubuntu 19.04 wallpapers

Too funky for my taste. But I'm sure there will be many who will prefer this darker, edgier wallpaper over the others. As we said earlier, the new "Disco Dingo" mascot calls for infinite wallpaper variations.


Apart from theme and artwork updates, Ubuntu 19.04 has the latest Linux Kernel version 5.0 (5.0.0.8 to be precise). You can read more about Ubuntu 19.04 features and updates here.

Ubuntu 19.04 hit beta a few days ago. Though it is already a pretty stable release for a beta, I'd recommend waiting another 15 days or so until the final release. If all you care about are the wallpapers, you can download the new Ubuntu 19.04 wallpapers here. It's a DEB file; just double-click it after downloading.

LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]

UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on Windows pre-installed machines, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.

What is UEFI?
UEFI is a standard, or a specification, that replaced the legacy BIOS firmware which was the industry standard for decades. Essentially, UEFI defines the software interface between the operating system and the platform firmware.


UEFI boot has three phases: SEC, PEI and DXE. DXE, short for Driver eXecution Environment, is the phase where the UEFI system loads drivers for configured devices. LinuxBoot replaces specific firmware functionality, like the UEFI DXE phase, with a Linux kernel and runtime.

LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features."  writes Linux Foundation.

linuxboot uefi replacement

LinuxBoot will replace this slow and often error-prone code with a Linux Kernel. This alone should significantly improve system startup performance.

On top of that, LinuxBoot intends to achieve increased boot reliability and boot-time performance by removing unnecessary code and by using reliable Linux drivers instead of lightly tested firmware drivers. LinuxBoot claims that these improvements could potentially help make the system startup process as much as 20 times faster.

In fact, this "Linux to boot Linux" technique has been fairly common place in supercomputers, consumer electronics, and military applications, for decades. LinuxBoot looks to take this proven technique and improve on it so that it can be deployed and used more widely by individual users and companies.

Current Status
LinuxBoot is not as obscure or far-fetched as, say, lowRISC (an open-source, Linux-capable SoC) or even OpenPilot. At the FOSDEM 2019 summit, Facebook engineers revealed that their company is actively integrating and fine-tuning LinuxBoot to its needs for freeing hardware down to the lowest levels.


Facebook and Google are deeply involved in LinuxBoot project. Being large data companies, where even small improvements in system startup speed and reliability can bring major advantages, their involvement is not a surprise. To put this in perspective, a large data center run by Google or Facebook can have tens of thousands of servers. Other companies involved include Horizon Computing, Two Sigma and 9elements Cyber Security.

Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]

The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride-sharing and ride-hailing company valued in the billions. Uber is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.

Enough about Uber as a company. To those among you who regularly use Uber app for booking a cab, Uber CLI could be a useful companion.


Uber CLI can be a great tool for the easily distracted. This unique command line application allows you to look up Uber cab's time and price estimates without ever taking your eyes off the laptop screen.

Install Uber-CLI using NPM

You need to have NPM first to install Uber-CLI on Ubuntu. npm, short for Node.js package manager, is a package manager for the JavaScript programming language. It is the default package manager for the JavaScript runtime environment Node.js. npm has a command line based client and its own repository of packages.

This is how to install npm on Ubuntu 19.04, and Ubuntu 18.10. And thereafter, using npm, install Uber-CLI. Fire up the Terminal and run the following.

sudo apt update
sudo apt install nodejs npm
npm install uber-cli -g

And you're done. Uber CLI is a command-line application; here are a few examples of how it works in the Terminal. Also, since Uber is not available where I live, I can't vouch for its accuracy.


Uber-CLI has just two use cases.
uber time 'pickup address here'
uber price -s 'start address' -e 'end address'
Easy, right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test it and leave feedback. See the Uber CLI GitHub page for more info.

UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]

Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my 4-year-old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.



ubuntu phone 16.04
UBports Installer and Ubuntu Touch
As many of you know already, Ubuntu Touch was Canonical's failed attempt to deliver a competent mobile operating system based on its desktop version. The first Ubuntu Touch installed smartphone was released in 2015 by BQ, a Spanish smartphone manufacturer. And in April 2016, the world's first Ubuntu Touch based tablet, the BQ Aquaris M10 Ubuntu Edition, was released.

Though the initial response was quite promising, Ubuntu Touch failed to make a significant enough splash in the smartphone space. In fact, Ubuntu Touch was not alone; many other mobile OS projects, like Firefox OS or even Samsung-owned Tizen for that matter, failed to capture a sizable market share from the Android/iOS duopoly.

To the disappointment of Ubuntu enthusiasts, Mark Shuttleworth announced the termination of Ubuntu Touch development in April, 2017.


Rise of UBports and revival of Ubuntu Touch Project
ubuntu touch 16.04

For all its inadequacies, Ubuntu Touch was one unique OS. It looked and felt different from most other mobile operating systems. And Ubuntu Touch enthusiasts were not ready to give up on it so easily. Enter UBports.

UBports turned Ubuntu Touch into a community-driven project. Passionate people from around the world now contribute to the development of Ubuntu Touch. In August 2018, UBports released OTA-4, upgrading Ubuntu Touch's base from Canonical's original Ubuntu 15.04 (Vivid Vervet) to the current long-term support version, Ubuntu 16.04 LTS.

They actively test the OS on a number of legacy smartphone hardware and help people install Ubuntu Touch on their smartphones using an incredibly capable, cross-platform, installer.

Ubuntu Touch Installer on Ubuntu 19.04
Though I knew about the UBports project before, I was never motivated enough to try the new OS on my Aquaris E4.5, until yesterday. By a sheer stroke of luck, I stumbled upon the UBports Installer in the Ubuntu Software Center. I was curious to find out if it really worked as claimed on the page.

ubuntu touch installer on ubuntu 19.04

I fired up the app on my Ubuntu 19.04 and plugged in my Aquaris E4.5. Voila! The installer detected my phone in a jiffy. Since there wasn't much data on my BQ, I proceeded with the Ubuntu Touch installation.

ubports ubuntu touch installer

The instructions were pretty straightforward, and it took probably 15 minutes to download, restart, and install the 16.04 LTS based Ubuntu Touch on my 4-year-old hardware.

ubuntu touch ubports

In my experience, even flashing an Android phone was never this easy! My Ubuntu phone is usable again without all the unnecessary bloat that made it clunky. This post is a tribute to the UBports community for the amazing work they've been doing with Ubuntu Touch. Here's also a list of smartphone hardware that can run Ubuntu Touch.

Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]

We've featured cool-retro-term before. It is a wonderful little terminal emulator app on Ubuntu (and Linux) that adorns this cool retro look of the old CRT displays.

Let the pictures speak for themselves.

retro terminal ubuntu ppa

Pretty cool, right? Not only does it look cool, it functions just like a normal Terminal app. You don't lose out on any features normally associated with a regular Terminal emulator. cool-retro-term comes with a bunch of themes and customisations that take its retro-cool appeal a few notches higher.

cool-old-term retro terminal ubuntu linux

Enough now, let's find out how you install this retro looking Terminal emulator on Ubuntu 18.04 LTS, and Ubuntu 18.10. Fire up your Terminal app, and run these commands one after the other.

sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term

Done. The above PPA supports Ubuntu Artful, Bionic and Cosmic releases (Ubuntu 17.10, 18.04 LTS, 18.10). cool-retro-term is now installed and ready to go.


Since I don't have Artful or Bionic installations in any of my computers, I couldn't test the PPA on those releases. Do let me know if you faced any issues while installing the app.

And as some of you might have noticed, I'm running cool-retro-term from an AppImage. This is because I'm on Ubuntu 19.04 "disco dingo", and obviously the app doesn't support an unreleased OS (well, duh!).

retro terminal ubuntu ppa

This is how it looks on fullscreen mode. If you are a non-Ubuntu user, you can find various download options here. If you are on Fedora or distros based on it, cool-retro-term is available in the official repositories.

Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]

Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday where they unveiled the much hyped game streaming platform called Stadia.

Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works just as good as showed in the demos, Stadia could be what the future of gaming might look like.

Stadia, Powered by Linux

It is fairly common knowledge that Google data centers use Linux rather extensively. So it is not really surprising that Google would use Linux to power its cloud-based Stadia gaming service.

google stadia runs on linux

Stadia's architecture is built on Google's data center network, which has an extensive presence across the planet. With Stadia, Google is offering a virtual platform where processing resources can be scaled up to match your gaming needs without the end user ever spending a dime more on hardware.


And since Google data centers mostly run on Linux, the games on Stadia will run on Linux too, through the Vulkan API. This is great news for gaming on Linux. Even if Stadia doesn't directly result in more games on Linux, it could potentially make gaming a platform-agnostic, cloud-based service, like Netflix.

With Stadia, "the data center is your platform," claims Majd Bakar, head of engineering at Stadia. Stadia is not constrained by limitations of traditional console systems, he adds. Stadia is a "truly flexible, scalable, and modern platform" that takes into account the future requirements of the gaming ecosystem. When launched later this year, Stadia will be able to stream at 4K HDR and 60fps with surround sound.


Watch the full presentation here. Tell us what you think about Stadia in the comments.

Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]

Ubuntu 19.04 is scheduled to arrive in another 30 days (and has since been released). I've been using it for the past week or so, and even as a pre-beta, the OS is pretty stable and not buggy at all. Here are a bunch of things you should know about the yet-to-be-officially-released Ubuntu 19.04.

what's new in ubuntu 19.04

1. Codename: "Disco Dingo"

How about that! As most of you know already, Canonical names its semiannual Ubuntu releases using an adjective and an animal with the same first letter (Intrepid Ibex, Feisty Fawn, and Maverick Meerkat, for example, were some of my favourites). And the upcoming Ubuntu 19.04 is codenamed "Disco Dingo", which has to be one of the coolest codenames ever for an OS.


2. Ubuntu 19.04 Theme Updates

A cleaner, crisper-looking Ubuntu is coming your way. Can you spot the subtle changes to the default Ubuntu theme in the screenshot below? Like the new deep-black top panel and launcher? Very tastefully done.

what's new in ubuntu 19.04

To be sure, this is now looking more and more like vanilla GNOME and less like Unity, which is not a bad thing.

ubuntu 19.04 updates

There are changes to the icons too. That hideous blue Trash icon is gone. Others include a new Update Manager icon, Ubuntu Software Center icon and Settings Icon.

3. Ubuntu 19.04 Official Mascot

GIFs speak louder than words. Meet the official "Disco Dingo" mascot.



Pretty awesome, right? The "Disco Dingo" mascot calls for infinite wallpaper variations.

4. The New Default Wallpaper

The new "Disco Dingo" themed wallpaper is so sweet: very Ubuntu-ish yet unique. A gray scale version of the same wallpaper is a system default too.

ubuntu 19.04 disco dingo features

UPDATE: There's an entire suite of newer and better wallpapers on Ubuntu 19.04!

5. Linux Kernel 5.0 Support

Ubuntu 19.04 "Disco Dingo" will officially support the recently released Linux Kernel version 5.0. Among other things, Linux Kernel 5.0 comes with AMD FreeSync display support which is awesome news to users of high-end AMD Radeon graphics cards.

ubuntu 19.04 features

Also important to note is the added support for Adiantum Data Encryption and Raspberry Pi touchscreens. Apart from that, Kernel 5.0 has regular CPU performance improvements and improved hardware support.

6. Livepatch is ON

Ubuntu 19.04's 'Software and Updates' app has a new default tab called Livepatch. This new feature should ideally help you to apply critical kernel patches without rebooting.

Livepatch may not mean much to a normal user who regularly powers down his or her computer, but it can be very useful for enterprise users, where any downtime is simply not acceptable.

ubuntu 19.04 updates

Canonical introduced this feature in Ubuntu 18.04 LTS, but it was later removed when Ubuntu 18.10 was released. The Livepatch feature is disabled on my Ubuntu 19.04 installation though, with a "Livepatch is not available for this system" warning. Not exactly sure what that means. Will update.

7. Ubuntu 19.04 Release Schedule

The beta freeze is scheduled for March 28th and the final release for April 18th.

ubuntu 19.04 what's new

Normally, once the beta is out, it is safe to install Ubuntu 19.04 for everyday use in my opinion, but ONLY if you are inclined to give it a spin before everyone else, of course. I'd never recommend a pre-release OS on production machines. Ubuntu 19.04 Daily Build Download.


My biggest disappointment, though, is the rumored Ubuntu Software Center revamp, which is now confirmed not to make it into this release. Subscribe to us on Twitter and Facebook for more Ubuntu 19.04 release updates.

ubuntu 19.04 disco dingo

Recommended read: Top things to do after installing Ubuntu 19.04

Purism: A Linux OS is talking Convergence again [Tech Drive-in]

The hype around "convergence" just won't die it seems. We have heard it from Ubuntu a lot, KDE, even from Google and Apple in fact. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Googled failed to pull it off with their Android/iOS duopoly. Purism's Debian based PureOS wants to change all that for good.

pure os linux

Purism, PureOS, and the future of Convergence

Purism, a computer technology company based out of California, shot to fame for its Librem series of privacy and security focused laptops and smartphones. Purism raised over half a million dollars through a Crowd Supply crowdfunding campaign for its laptop hardware back in 2015. And unlike many crowdfunding megahits which later turned out to be duds, Purism delivered on its promises big time.


Later, in 2017, Purism surprised everyone again with a successful crowdfunding campaign for its Linux-based open source smartphone, dubbed Librem 5. The campaign raised over $2.6 million, surpassing its $1.5 million crowdfunding goal in just two weeks. Purism's Librem 5 smartphones will start shipping in late 2019.

Librem, which loosely refers to free and open source software, was the brand name chosen by Purism for its laptops and smartphones. One of the biggest USPs of Purism devices is the hardware kill switches they come loaded with, which physically disconnect the camera, WiFi, Bluetooth, and mobile broadband modem.

Meet PureOS, Purism's Debian Based Linux OS

PureOS is a free and open source, Debian-based Linux distribution which runs on all Librem hardware, including its smartphones. PureOS is endorsed by the Free Software Foundation.

purism os linux

The term convergence, in computer speak, refers to applications that work seamlessly across platforms, bringing a consistent look and feel and similar functionality to your smartphone and your computer.
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
Ideally, convergence should be able to help app developers and users all at the same time. App developers should be able to write their app once, testing it once and running it everywhere. And users should be able to seamlessly use, connect and sync apps across devices and platforms.

Easier said than done though. As Jeremiah Foster himself explains:
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."

How is PureOS achieving convergence?

PureOS has had a distinct advantage when it comes to convergence. Purism is a hardware maker that also designs its platforms and software. From its inception, Purism has been working on a "universal operating system" that can run on different CPU architectures.

librem opensource phone

"By basing PureOS on a solid, foundational operating system – one that has been solving this performance and run-everywhere problem for years – means there is a large set of packaged software that 'just works' on many different types of CPUs."

The second big factor is "adaptive design", software apps that can adapt for desktop or mobile easily, just like a modern website with responsive deisgn.


"Purism is hard at work on creating adaptive GNOME apps – and the community is joining this effort as well – apps that look great, and work great, both on a phone and on a laptop".

Purism has also developed an adaptive presentation library for GTK+ and GNOME, called libhandy, which third-party app developers can use to contribute to Purism's convergence ecosystem. Still under active development, libhandy is already packaged into PureOS and Debian.

Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]

Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. Be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a wayback machine in that sense.

ubuntu live wallpaper

Install Gorgeous Live Wallpapers in Ubuntu 18.10/18.04 using Komorebi

Komorebi Wallpaper Manager comes with a pretty neat collection of live wallpapers and even video wallpapers. The package also contains a simple tool to create your own live wallpapers.


Komorebi comes packaged in a convenient 64-bit DEB package, making it super easy to install on Ubuntu and most Debian-based distros (the latest version dropped 32-bit support though).
ubuntu 18.10 live wallpaper

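The install step itself is just a double-click on the downloaded DEB, or, from the Terminal, something like the following (a sketch; komorebi.deb stands in for whatever the actual file name of the release you downloaded is):

sudo apt install ./komorebi.deb
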
That's it! Komorebi is installed and ready to go! Now launch Komorebi from the app launcher.

ubuntu komorebi live wallpaper

And finally, to uninstall Komorebi and revert all the changes you made, do this in Terminal (CTRL+ALT+T).

sudo apt remove komorebi

Komorebi works great on Ubuntu 18.10, and 18.04 LTS. A few more screenshots.

komorebi live wallpaper ubuntu

As you can see, live wallpapers obviously consume more resources than a regular wallpaper, especially when you switch on Komorebi's fancy video wallpapers. But it is definitely not the resource hog I feared it would be.

ubuntu wallpaper live time and date

Like what you see here? Go ahead and give Komorebi Wallpaper Manager a spin. Does it turn out to be less resource-friendly on your PC? Let us know your opinion in the comments.

ubuntu live wallpapers

A video wallpaper example. To see them in action, watch this demo.

Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.

install Mario on Ubuntu

Play Nintendo's Mario Unofficially on Ubuntu 18.10

"Mari0 is a Mario + Portal platformer game." It is not an official release and hence the slight name change (Mari0 instead of Mario). Mari0 is still in testing, and might not work as intended. It doesn't work fullscreen for example, but everything else seems to be working great in my PC.

But please be aware that this app is still in testing, and a lot of things can go wrong. Mari0 also comes with joystick support. Here's how you install the unofficial Mari0 snap package. Do this in the Terminal (CTRL+ALT+T):

sudo snap install mari0

To enable joystick support:

sudo snap connect mari0:joystick

nintendo mario ubuntu

Please find time to provide valuable feedback to the developer post testing, especially if something went wrong. You can also leave your feedback in the comments below.

Florida based Startup Builds Ubuntu Powered Aerial Robotics [Tech Drive-in]

Apellix is a Florida-based startup that specialises in aerial robotics. It intends to create safer work environments by replacing workers with task-specific drones that complete high-risk jobs at dangerous or elevated work sites.

ubuntu robotics

Robotics with an Ubuntu Twist

Ubuntu is expanding its reach into robotics and IoT in a big way. A few years ago at the TechCrunch Disrupt event, UAVIA unveiled a new generation of its one hundred percent remotely operable drones (an industry first, they claimed), which were built with Ubuntu under the hood. Then there were others, like Erle Robotics (recently renamed Acutronic Robotics), which made big strides in drone technology using Ubuntu at its core.


Apellix is the only aerial robotics company with drones "capable of making contact with structures through fully computer-controlled flight", claims Robert Dahlstrom, Founder and CEO of Apellix.

"At height, a human pilot cannot accurately gauge distance. At 45m off the ground, they can’t tell if they are 8cm or 80cm away from the structure. With our solutions, an engineer simply positions the drone near the inspection site, then the on-board computer takes over and automates the delicate docking process." He adds.


Apellix considered many popular Linux distributions before zeroing in on Ubuntu for its stability, reliability, and large developer ecosystem. Ubuntu's versatility also enabled Apellix to use the same underlying OS platform and software packages across development and production.

The team is currently developing on Ubuntu Server, with the intent to migrate to Ubuntu Core. The company makes extensive use of Ubuntu Server both on board its robotic systems and in its cloud operations, according to a case study by Canonical, the company behind Ubuntu.

apellix ubuntu drones

"With our aircraft, an error of 2.5 cm could be the difference between a successful flight and a crash," comments Dahlstrom. "Software is core to avoiding those errors and allowing us to do what we do - so we knew that placing the right OS at the heart of our solutions was essential." 

Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]

Openpilot is an open source driving agent which, at the moment, can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist in vehicles from a select few auto manufacturers.


opensource autopilot system

Meet Project Openpilot

Open source isn't a stranger to the world of autonomous cars. Even as far back as 2013, Ubuntu was spotted in Mercedes-Benz driverless cars, and it is also well known that Google is using a 'lightly customized Ubuntu' at the core of its push towards building fully autonomous cars.

Openpilot, though, is unique in its own way. It's an open source driving agent that already works (as claimed) in a number of models from manufacturers such as Toyota, Kia, Honda, Chevrolet, Hyundai, and Jeep.


Above image: an Openpilot user getting a distraction alert. Apart from its Adaptive Cruise Control (ACC) and Lane Keeping Assist functions, Openpilot's developers claim that their technology is currently "about on par with Tesla Autopilot and GM Super Cruise, and better than all other manufacturers."

If Tesla's Autopilot is iOS, Openpilot's developers would like their product to become the "Android for cars": the ubiquitous software of choice when autonomous systems in cars go universal.



The Openpilot-endorsed, officially supported list of cars keeps growing. It now includes some 40 odd models from manufacturers ranging from Toyota to Hyundai. And they are actively testing Openpilot on newer cars from VW, Subaru etc. according to their Twitter feed.

Even a lower variant of the Tesla Model S, which came without the Tesla Autopilot system, was upgraded with comma.ai's Openpilot solution, which then mimicked a number of Tesla Autopilot features, including automatic steering on highways, according to this article. (comma.ai is the startup behind Openpilot.)

Related read: Udacity's attempts to build a fully opensource self-driving car, and Linux Foundation's Automotive Grade Linux (AGL) infotainment system project which Toyota intends to use in its future cars.

Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]

OK, that might be an overstatement. But Oranchelo is good, really good.


Oranchelo Icons Theme for Ubuntu 18.10

Oranchelo is a flat-design icon theme originally designed for the XFCE4 desktop, though it works great on GNOME as well. I especially like the distinct take on the Firefox and Chromium icons, as you can see in the screenshot.



Here's how you install Oranchelo icons theme on Ubuntu 18.10 using Oranchelo PPA. Just copy-paste the following three commands to Terminal (CTRL+ALT+T).

sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme

Now run GNOME Tweaks, Appearance > Icons > Oranchelo.


Meet the artist behind Oranchelo icons theme at his deviantart page. So, how do you like the new icons? Let us know your opinion in the comments below.


11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]

Have been using "Cosmic Cuttlefish" since its first beta. It is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. Now let's discuss what can be done to improve the overall user-experience by diving deep into the nitty gritties of Canonical's brand new flagship OS.

1. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

This has been perhaps the standard 'first thing to do' ever since the Ubuntu age dawned on us. You do have an option to install most of the 'restricted-extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in the Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

2. Get GNOME Tweaks

GNOME Tweaks is non-negotiable.

things to do after installing ubuntu 18.10

GNOME Tweaks is an app that lets you tweak the little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 18.10, Tweaks is a must. Honestly, I don't remember if it was installed by default, but install it anyway; Apt-URL will let you know if the app is already there.


Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Displaying Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self explanatory.

things to do after installing ubuntu 18.10

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu, and enable/disable what you need.

4. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there; all you need to do is switch it ON. Do this in the Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. If you don't find the 'click to minimize' feature useful, you can always revert the Dock settings to their original state by copy-pasting the following command into the Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action

5. Pin/Unpin Useful Stuff from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

things to do after ubuntu 18.10
 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on the launcher. But I would prefer a shortcut to the Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.

6. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling is a system default now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings': Settings > Mouse and Touchpad.

Quick tip: You can go straight to a submenu by simply searching for it in GNOME's universal search bar.

ubuntu 18.10 cosmic

Take, for example, the screenshot above, where I triggered the GNOME menu by hitting the Super (Windows) key and simply searched for 'mouse' settings. The first result takes me directly to the 'Mouse and Touchpad' submenu in 'Settings' that we saw earlier. Easy, right? More examples will follow.

7. Nightlight Mode ON

When you're glued to your laptop/PC screen for long stretches every day, it is advisable to enable the automatic nightlight mode for the sake of your eyes. Be it my laptop or my phone, this has become an essential feature. The sight of an LED display without nightlight ON in low-light conditions immediately gives me a headache these days. Easily one of my favourite built-in features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 18.10

OR, as before: hit the Super key > search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples of that.

things to do after installing ubuntu 18.10

8. Safe Eyes App for Ubuntu

A popup that fills the entire screen and forces you to take your eyes off it.

apps for ubuntu 18.10

Apart from enabling the nightlight mode, Safe Eyes is another app I strongly recommend to those who stare at their laptops for long periods of time. This nifty little app forces you to take your eyes off the computer screen and do some standard eye exercises at regular intervals (which you can change).

things to do after installing ubuntu 18.10

Installation is pretty straightforward. Just these three commands in your Terminal.

sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update
sudo apt install safeeyes

9. Privacy on Ubuntu 18.10

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 18.10 privacy

Ubuntu remembers your usage & history to recommend frequently used apps and such, and this is never shared over the network. But if you're not comfortable with it, you can always disable and delete your usage history on Ubuntu: Settings > Privacy > Usage & History.

10. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 18.10

Right now I'm using System76's Pop OS GTK theme and icon set. They look pretty neat, I think. Just a few commands to install them on your Ubuntu 18.10.

sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers

Execute the last command only if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon set, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions, so stay subscribed.

11. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.

sudo gedit /etc/default/apport

This will open up a text editor window which has only one entry: "enabled=1". Change the value to '0' (zero) and you have Apport error reporting completely disabled.


Follow us on Facebook, and Twitter

RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]

"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.


RIOT OS: A tiny OS for embedded systems

Initially developed by Freie Universität Berlin (FU Berlin), the INRIA institute and HAW Hamburg, RIOT OS has evolved over the years into a very competent alternative to TinyOS, Contiki, etc. It supports application programming in languages such as C and C++, and provides full multithreading and real-time capabilities. RIOT can run on 8-bit, 16-bit and 32-bit processors, including ARM Cortex-M parts.


RIOT is open source, with its source code published on GitHub, and is based on a microkernel architecture (the bare minimum software required to implement an operating system). RIOT OS vs. the competition:

riot os for IoT

More information on RIOT OS can be found here. RIOT summits are held annually in major European cities; pin this up if you are interested. Thank you for reading.

IBM, the 6th biggest contributor to Linux Kernel, acquires RedHat for $34 Billion [Tech Drive-in]

The $34 billion all-cash deal to purchase open source pioneer Red Hat is IBM's biggest ever acquisition by far. The deal will give IBM a major foothold in the fast-growing cloud computing market, and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?

ibm-redhat

Another Oracle - Sun Microsystems deal in the making? 
The alarmists among us might be quick to compare the IBM - Red Hat deal with the decade-old deal between Oracle Corporation and Sun Microsystems, which was then a major player in the open source software scene.

But fear not. Unlike Oracle (which killed off Sun's OpenSolaris OS almost immediately after the acquisition and even started a patent war against Android using Sun's Java patents), IBM is already a major contributor to open source software, including the mighty Linux kernel. In fact, IBM was the 6th biggest contributor to the Linux kernel in 2017.

What's in it for IBM?
With the acquisition of Red Hat, IBM becomes the world's #1 hybrid cloud provider, "offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses", according to Ginni Rometty, IBM Chairman, President and CEO. She adds:

“Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

The Future of Red Hat
The Red Hat story is almost as old as Linux itself. Founded in 1993, RedHat's growth was phenomenal. Over the next two decades Red Hat went on to establish itself as the premier Linux company, and Red Hat OS was the enterprise Linux operating system of choice. It set the benchmark for others like Ubuntu, openSUSE and CentOS to follow. Red Hat is currently the second largest corporate contributor to the Linux kernel after Intel (Intel really stepped-up its Linux Kernel contributions post-2013).

Regular users might be more familiar with Fedora Project, a more user-friendly operating system maintained by Red Hat that competes with mainstream, non-enterprise operating systems like Ubuntu, elementary OS, Linux Mint or even Windows 10 for that matter. Will Red Hat be able to stay independent post acquisition?

According to the official press release, "IBM will remain committed to Red Hat’s open governance, open source contributions, participation in the open source community and development model, and fostering its widespread developer ecosystem. In addition, IBM and Red Hat will remain committed to the continued freedom of open source, via such efforts as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network." Well, that's a huge relief.

In fact, IBM and Red Hat have been partners for over 20 years, with IBM serving as an early supporter of Linux and collaborating with Red Hat to help develop and grow enterprise-grade Linux. And as the IBM CEO mentioned, the acquisition is more of an evolution of the long-standing partnership between the two companies.
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."
Predicting the future can be tricky. A lot of things can go wrong. But one thing is sure, the acquisition of Red Hat by IBM is nothing like the Oracle - Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.

How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]

One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.

Upgrade to Ubuntu 18.10 from 18.04

Ubuntu 18.10 has a brand new look!
As you can see from the screenshot, a lot has changed. Ubuntu 18.10 arrives with a major theme overhaul. After almost a decade, the default Ubuntu GTK theme ("Ambiance") is being replaced with a brand new one called "Yaru". The new theme is based heavily on GNOME's default "Adwaita" GTK theme. More on that later.

Upgrade from Ubuntu 18.04 LTS to 18.10
If you're on Ubuntu 18.04 LTS, upgrading to 18.10 "Cosmic" is a pretty straightforward affair. Since 18.04 is a long-term support (LTS) release (meaning the OS will get official updates for about 5 years), it may not prompt you with an upgrade option when 18.10 finally arrives.

So here's how it's done. Disclaimer: back up your critical data before going forward, and better not to try this on mission-critical machines. You're on an LTS anyway.
  • An up-to-date Ubuntu 18.04 LTS is the first step. Do the following in Terminal.
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt autoremove
  • The first command will check for updates and then proceed with upgrading your Ubuntu 18.04 LTS with the latest updates. The "autoremove" command will clean up any and all dependencies that were installed with applications, and are no longer required.
  • Now the slightly tricky part. You need to edit the /etc/update-manager/release-upgrades file and change the Prompt entry (for example Prompt=lts or Prompt=never) to Prompt=normal, or else it will give a "no release found" error message. 
  • I used Vim to make the edit. But for the sake of simplicity, let's use gedit. 
$ sudo gedit /etc/update-manager/release-upgrades
  • Make the edit and save the changes. Now you are ready to go ahead with the upgrade. Make sure your laptop is plugged-in, this will take time. 
  • To be on the safer side, please make sure that there's at least 5GB of disk space left in your home partition (it will prompt you and exit if you don't have enough space required for the upgrade). 
$ sudo do-release-upgrade -d
  • That's it. Wait for a few hours and let it do its magic. 
My upgrade to Ubuntu 18.10 was uneventful. Nothing broke and it all worked like a charm. After the upgrade is done, you're probably still stuck with your old theme. Fire up the "GNOME Tweaks" app (get it from the Software Center if you haven't already), and change the theme and the icons to "Yaru".

Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]

A real private mode in Firefox? A Tor-integrated Firefox could be just that. The Tor Project is currently working with Mozilla to integrate Tor into Firefox.


Over the years, and more so since the Cambridge Analytica scandal, Mozilla has taken a progressively tougher stance on user privacy. Firefox's Facebook Container extension, for example, makes it much harder for Facebook to collect data from your browsing activities (yep, that's a thing: Facebook is tracking your every move on the web). The extension now covers Facebook Messenger and Instagram as well.

Firefox with Tor Integration

For starters, Tor is a free software and an open network for anonymous communication over the web. "Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location."

And don't confuse this project with the Tor Browser, which is a web browser with Tor's elements built on top of Firefox ESR builds. The Tor Browser in its current form has many limitations: since it is based on Firefox ESR, it takes a lot of time and effort to rebase the browser with new features from Firefox's stable builds every year or so.

Enter 'Project Fusion'

Now that Mozilla has officially taken over the work of integrating Tor into Firefox through Project Fusion, things could change for the better. With the intention of creating a 'super-private' mode in Firefox that supports First Party Isolation (which prevents cookies from tracking you across domains), Fingerprinting Resistance (which blocks user tracking through canvas elements), and a Tor proxy, 'Project Fusion' is aiming big. To put it together, the goals of 'Project Fusion' can be condensed into four points (two of which can already be previewed in today's Firefox, as noted after the list).
  • Implementing fingerprinting resistance, make more user friendly and reduce web breakage.
  • Implement proxy bypass framework.
  • Figure out the best way to integrate Tor proxy into Firefox.
  • Real private browsing mode in Firefox, with First Party Isolation, Fingerprinting Resistance, and Tor proxy.
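Two of these building blocks already exist behind preferences in current Firefox builds, if you want a rough preview today. A sketch, assuming a recent Firefox; flip these in about:config (note that they can break some sites):

privacy.firstparty.isolate = true
privacy.resistFingerprinting = true
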
As good as it sounds, Project Fusion could still be years away or may not happen at all given the complexity of the work. According to a Tor Project Developer at Mozilla:
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."
If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at the Tor 'Project Fusion' page.

City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]

In another major win within a span of weeks for the proponents of open source solutions in Europe, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as the software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training on an IT infrastructure that is largely open source.

Switzerland's Largest Open Source deal

Over 10,000 Students to Benefit

Switzerland's largest open-source deal introduces a brand new IT infrastructure for the public schools of its capital city. The package's core component is Collabora Cloud Office, an online version of LibreOffice that is to be hosted in the City of Bern's data center. Nextcloud, Kolab, Moodle, and Mahara are the other prominent open source tools included in the package. The contract is worth CHF 13.7 million over 6 years.

In an interview given to 'Der Bund', one of Switzerland's oldest news publications, open-source advocate Matthias Stürmer, an EPP city councillor and IT expert, said that this is probably the largest ever open-source deal in Switzerland.

Many European countries are clamoring to adopt open source solutions for their cities and schools. From the German Federal Information Technology Centre's (ITZBund) recent selection of Nextcloud as its cloud solutions partner, to the city of Turin's adoption of Ubuntu, to the Italian military's LibreOffice migration, Europe's recognition of open source solutions as a legitimate alternative is gaining ground.

Ironically enough, most of this software will run on Apple's proprietary iOS platform, as the client devices given to students will all be Apple iPads. But hey, it had to start somewhere. When Europe's richest countries adopt open source, others will surely take notice. Stay tuned for updates. [via inside-channels.ch]

Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]

Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which, unlike those fancy public cloud solutions, is completely private and under its direct control.

Germany's Open Source Migration

Given the recent privacy mishaps at some of the biggest public cloud solution providers on the planet, it is only natural that government agencies across the world are opting for solutions that give users more privacy and security. If the recent Facebook - Cambridge Analytica debacle is any indication, data vulnerability has become a serious national security concern for all countries.

In light of these developments, the German government's IT service provider, ITZBund, has chosen Nextcloud as its cloud solutions partner. Nextcloud is a free and open source cloud solutions company based in Europe that lets you install and run its software on your own private server. ITZBund has been running a pilot with some 5,000 users on Nextcloud's platform since 2016.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."
ITZBund employs about 2,700 people, including IT specialists, engineers, and network and security professionals. After the successful completion of the pilot, ITZBund floated a public tender, which eventually selected Nextcloud as the preferred partner. Nextcloud scored high on security requirements and scalability, which it addresses through its unique apps concept.

LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]

Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.


The Open Source LG webOS

Under the 2013 agreement with HP Inc., LG Electronics had unlimited access to all webOS related documentation and source code. When LG took the project underground, webOS was still an open-source project.

After many years of development, webOS is now LG's platform of choice for its Smart TV division. It is generally considered one of the better-sorted Smart TV user interfaces. LG is now ready to take the platform beyond Smart TVs: it has developed an open source version of its platform, called webOS Open Source Edition, now available to the public at webosose.org.

Dr. I.P. Park, CTO at LG Electronics, had this to say: "webOS has come a long way since then and is now a mature and stable platform ready to move beyond TVs to join the very exclusive group of operating systems that have been successfully commercialized at such a mass level. As we move from an app-based environment to a web-based one, we believe the true potential of webOS has yet to be seen."

By open sourcing webOS, it looks like LG is gunning for Samsung's Tizen OS, which is also open source and built on top of Linux. In our opinion, device manufacturers preferring open platforms (like Automotive Grade Linux), over Android or iOS is a welcome development for the long-term health of the industry in general.

04-11-2021

15:06

Announcement of LibreOffice 7.1.7 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, November 4, 2021 – LibreOffice 7.1.7 Community, the seventh minor release of the LibreOffice 7.1 family, targeted to desktop productivity, is available for download from https://www.libreoffice.org/download/.

End user support is provided by volunteers via email and online resources: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.

LibreOffice 7.1.7 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.1.7/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.1.7/RC2 (changed in RC2).

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 7.1.7 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

15-10-2021

10:26

The Document Foundation announces LibreOffice 7.2.2 Community [Press Releases Archives - The Document Foundation Blog]

Berlin, October 14, 2021 – The Document Foundation announces LibreOffice 7.2.2 Community, the second minor release of the LibreOffice 7.2 family targeted at technology enthusiasts and power users, which is available for download from https://www.libreoffice.org/download/. This version includes 68 bug fixes and improvements to document compatibility.

LibreOffice 7.2.2 Community is also available for Apple Silicon from this link: https://download.documentfoundation.org/libreoffice/stable/7.2.2/mac/aarch64/.

For enterprise-class deployments, TDF strongly recommends the LibreOffice Enterprise family of applications from ecosystem partners, with long-term support options, professional assistance, custom features and Service Level Agreements: https://www.libreoffice.org/download/libreoffice-in-business/.

LibreOffice Community and the LibreOffice Enterprise family of products are based on the LibreOffice Technology platform, the result of years of development efforts with the objective of providing a state of the art office suite not only for the desktop but also for mobile and the cloud.

Availability of LibreOffice 7.2.2 Community

LibreOffice 7.2.2 Community represents the bleeding edge in terms of features for open source office suites. For users whose main objective is personal productivity and who therefore prefer a release that has undergone more testing and bug fixing over new features, The Document Foundation provides LibreOffice 7.1.6.

LibreOffice 7.2.2 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.2.2/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.2.2/RC2 (changed in RC2).

LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice individual users are assisted by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 7.2.2 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

30-08-2021

11:12

Django Authentication Video Tutorial [Simple is Better Than Complex]

Updated at Nov 8, 2018: New video added to the series: How to integrate Django forms with Bootstrap 4.

In this tutorial series, we are going to explore Django’s authentication system by implementing sign up, login, logout, password change, password reset and protected views from non-authenticated users. This tutorial is organized into 8 videos, one for each topic, ranging from 4 min to 15 min each.


Setup

Starting a Django project from scratch, creating a virtual environment and an initial Django app. After that, we are going to set up the templates and create an initial view to start working on the authentication.

If you are already familiar with Django, you can skip this video and jump to the Sign Up tutorial below.


Sign Up

The first thing we are going to do is implement a sign up view using the built-in UserCreationForm. In this video you will also get some insights into basic Django form processing.
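
For reference, a minimal sketch of such a view (the "signup.html" template name and the "home" URL name are assumptions for illustration, not part of the video):

from django.contrib.auth import login
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import redirect, render


def signup(request):
    if request.method == "POST":
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()    # creates the user with a properly hashed password
            login(request, user)  # log the new user in right away
            return redirect("home")
    else:
        form = UserCreationForm()
    return render(request, "signup.html", {"form": form})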


Login

In this video tutorial we are going to first include the built-in Django auth URLs in our project and then proceed to implement the login view.
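
A minimal sketch of what including the built-in auth URLs looks like in the project urls.py (the "accounts/" prefix is just a common choice, not required):

from django.urls import include, path

urlpatterns = [
    path("accounts/", include("django.contrib.auth.urls")),  # login, logout, password change/reset views
]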


Logout

In this tutorial we are going to include Django logout and also start playing with conditional templates, displaying different content depending on whether the user is authenticated or not.


Password Change

The password change is a view where an authenticated user can change their password.


Password Reset

This tutorial is perhaps the most complicated one, because it involves several views and also sending emails. In this video tutorial you are going to learn how to use the default implementation of the password reset process and how to change the email messages.
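
As a rough sketch of the customization part: the built-in PasswordResetView lets you point it at your own email templates (the template paths below are hypothetical examples, not defaults from the video):

from django.contrib.auth import views as auth_views
from django.urls import path

urlpatterns = [
    path(
        "reset/",
        auth_views.PasswordResetView.as_view(
            email_template_name="registration/my_password_reset_email.html",
            subject_template_name="registration/my_password_reset_subject.txt",
        ),
        name="password_reset",
    ),
]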


Protecting Views

After implementing the whole authentication system, this video gives you an overview of how to protect some views from non-authenticated users by using the @login_required decorator and also class-based view mixins.
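
A minimal sketch of both approaches (the view names and template path are made up for illustration):

from django.contrib.auth.decorators import login_required
from django.contrib.auth.mixins import LoginRequiredMixin
from django.http import HttpResponse
from django.views.generic import TemplateView


@login_required
def secret_page(request):
    # Anonymous users are redirected to settings.LOGIN_URL before reaching this point
    return HttpResponse("Only authenticated users can see this.")


class SecretPageView(LoginRequiredMixin, TemplateView):
    template_name = "secret.html"  # hypothetical template name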


Bootstrap 4 Forms

Extra video showing how to integrate Django with Bootstrap 4 and how to use Django Crispy Forms to render Bootstrap forms properly. This video also includes some general advice and tips about using Bootstrap 4.


Conclusions

If you want to learn more about Django authentication and some extra stuff related to it, like how to use Bootstrap to make your auth forms look good, or how to write unit tests for your auth-related views, you can read the fourth part of my beginner’s guide to Django: A Complete Beginner’s Guide to Django - Part 4 - Authentication.

Of course the official documentation is the best source of information: Using the Django authentication system

The code used in this tutorial: github.com/sibtc/django-auth-tutorial-example

This was my first time recording this kind of content, so your feedback is highly appreciated. Please let me know what you think!

And don’t forget to subscribe to my YouTube channel! I will post exclusive Django tutorials there. So stay tuned! :-)

09-07-2021

20:56

What You Should Know About The Django User Model [Simple is Better Than Complex]

The goal of this article is to discuss the caveats of the default Django user model implementation and also to give you some advice on how to address them. It is important to know the limitations of the current implementation so as to avoid the most common pitfalls.

Something to keep in mind is that the Django user model is heavily based on its initial implementation, which is at least 16 years old. Because users and authentication are a core part of the majority of web applications using Django, most of its quirks have persisted in subsequent releases so as to maintain backward compatibility.

The good news is that Django offers many ways to override and customize its default implementation so as to fit your application's needs. But some of those changes must be made right at the beginning of the project; otherwise it would be too much of a hassle to change the database structure after your application is in production.

Below are the topics that we are going to cover in this article:


User Model Limitations

First, let’s explore the caveats and next we discuss the options.

The username field is case-sensitive

Even though the username field is marked as unique, by default it is case-sensitive. That means the usernames john.doe and John.doe identify two different users in your application.

This can be a security issue if your application has social aspects that build around the username, providing a public URL to a profile, like Twitter, Instagram or GitHub for example.

It also delivers a poor user experience, because people don't expect john.doe to be a different username than John.Doe, and if users do not type the username exactly the same way as when they created their account, they might be unable to log in to your application.

Possible Solutions:

  • If you are using PostgreSQL, you can replace the username CharField with the CICharField instead (which is case-insensitive)
  • You can override the method get_by_natural_key from the UserManager to query the database using iexact
  • Create a custom authentication backend based on the ModelBackend implementation

The username field validates against unicode letters

This is not necessarily an issue, but it is important for you to understand what that means and what the effects are.

By default the username field accepts letters, numbers and the characters: @, ., +, -, and _.

The catch here is on which letters it accepts.

For example, joão would be a valid username. Similarly, Джон or 約翰 would also be a valid username.

Django ships with two username validators: ASCIIUsernameValidator and UnicodeUsernameValidator. If the intended behavior is to only accept letters from A-Z, you may want to switch the username validator to use ASCII letters only by using the ASCIIUsernameValidator.

Possible Solutions:

  • Replace the default user model and change the username validator to ASCIIUsernameValidator
  • If you can’t replace the default user model, you can change the validator on the form you use to create/update the user

The email field is not unique

Multiple users can have the same email address associated with their account.

By default the email is used to recover a password. If there is more than one user with the same email address, the password reset will be initiated for all accounts and the user will receive an email for each active account.

This also may not be an issue, but it will certainly make it impossible to offer the option of authenticating users with their email address (like those sites that allow you to log in with either username or email address).

Possible Solutions:

  • Replace the default user model using the AbstractBaseUser to define the email field from scratch
  • If you can’t replace the user model, enforce the validation on the forms used to create/update

The email field is not mandatory

By default the email field does not allow null; however, it allows blank values, so it pretty much lets users not provide an email address.

Again, this may not be an issue for your application. But if you intend to allow users to log in with their email address, it may be a good idea to enforce the registration of this field.

When using the built-in resources like user creation forms or when using model forms you need to pay attention to this detail if the desired behavior is to always have the user email.

Possible Solutions:

  • Replace the default user model using the AbstractBaseUser to define the email field from scratch
  • If you can’t replace the user model, enforce the validation on the forms used to create/update

A user without password cannot initiate a password reset

There is a small catch in the user creation process: if the set_password method is called with None as a parameter, it will produce an unusable password. That also means the user will be unable to start a password reset to set their first password.

You can end up in that situation if you are using social networks like Facebook or Twitter to allow the user to create an account on your website.

Another way of ending up in this situation is simply by creating a user using the User.objects.create_user() or User.objects.create_superuser() without providing an initial password.

Possible Solutions:

  • If your user creation flow allows users to get started without setting a password, remember to pass a random (and lengthy) initial password so the user can later go through the password reset flow and set their own password (see the sketch below).
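
A minimal sketch of that idea, using Django's get_random_string helper (the field values are placeholders):

from django.contrib.auth.models import User
from django.utils.crypto import get_random_string

user = User.objects.create_user(
    username="john.doe",
    email="john.doe@example.com",
    password=get_random_string(50),  # long random password the user will never see
)
# The user can now go through the password reset flow to set their own password.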

Swapping the default user model is very difficult after you created the initial migrations

Changing the user model is something you want to do early on. After your database schema is generated and your database is populated it will be very tricky to swap the user model.

The reason is that you will likely have foreign keys referencing the user table, and Django's internal tables also create hard references to it. If you plan to change that later on, you will need to change and migrate the database by yourself.

Possible Solutions:

  • Whenever you are starting a new Django project, always swap the default user model, even if the default implementation fits all your needs. You can simply extend the AbstractUser and change a single configuration in the settings module. This will give you tremendous freedom and will make things way easier in the future should the requirements change.

Detailed Solutions

To address the limitations we discussed in this article we have two options: (1) implement workarounds to fix the behavior of the default user model; (2) replace the default user model altogether and fix the issues for good.

Which approach you should use is dictated by the stage your project is currently in.

  • If you have an existing project running in production that is using the default django.contrib.auth.models.User, go with the first solution implementing the workarounds;
  • If you are just starting your Django project, start off on the right foot and go with solution number 2.

Workarounds

First let’s have a look at a few workarounds that you can implement if your project is already in production. Keep in mind that those solutions assume you don’t have direct access to the User model, that is, you are currently using the default User model, importing it from django.contrib.auth.models.

If you did replace the User model, then jump to the next section to get better tips on how to fix the issues.

Making username field case-insensitive

Before making any changes you need to make sure you don’t have conflicting usernames in your database. For example, if you have a User with the username maria and another with the username Maria, you have to plan a data migration first. It is difficult to say exactly what to do because it really depends on how you want to handle it. One option is to append some digits to the username, but that can disturb the user experience.
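
If you do go down that road, a rough data-migration sketch could look like the following (the app label and dependency are hypothetical, and you would want to review the affected accounts manually before running anything like this):

from django.db import migrations


def deduplicate_usernames(apps, schema_editor):
    User = apps.get_model("auth", "User")
    seen = set()
    for user in User.objects.order_by("date_joined", "pk"):
        key = user.username.lower()
        if key not in seen:
            seen.add(key)
            continue
        # Conflict: append digits until the lowercased username is unique.
        counter = 2
        while f"{key}{counter}" in seen:
            counter += 1
        user.username = f"{user.username}{counter}"
        user.save(update_fields=["username"])
        seen.add(f"{key}{counter}")


class Migration(migrations.Migration):

    dependencies = [
        ("myapp", "0001_initial"),  # hypothetical dependency
    ]

    operations = [
        migrations.RunPython(deduplicate_usernames, migrations.RunPython.noop),
    ]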

Now let’s say you checked your database and there are no conflicting usernames and you are good to go.

The first thing you need to do is protect your sign up forms so they don't allow conflicting usernames to create accounts.

Then on your user creation form, used to sign up, you could validate the username like this:

def clean_username(self):
    username = self.cleaned_data.get("username")
    if User.objects.filter(username__iexact=username).exists():
        self.add_error("username", "A user with this username already exists.")
    return username

If you are handling user creation in a REST API using DRF, you can do something similar in your serializer:

def validate_username(self, value):
    if User.objects.filter(username__iexact=value).exists():
        raise serializers.ValidationError("A user with this username already exists.")
    return value

In the previous example, the ValidationError is the one defined in DRF.

The iexact notation on the queryset parameter will query the database ignoring the case.

Now that the user creation is sanitized we can proceed to define a custom authentication backend.

Create a module named backends.py anywhere in your project and add the following snippet:

backends.py

from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend


class CaseInsensitiveModelBackend(ModelBackend):
    def authenticate(self, request, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        if username is None:
            username = kwargs.get(UserModel.USERNAME_FIELD)
        try:
            case_insensitive_username_field = '{}__iexact'.format(UserModel.USERNAME_FIELD)
            user = UserModel._default_manager.get(**{case_insensitive_username_field: username})
        except UserModel.DoesNotExist:
            # Run the default password hasher once to reduce the timing
            # difference between an existing and a non-existing user (#20760).
            UserModel().set_password(password)
        else:
            if user.check_password(password) and self.user_can_authenticate(user):
                return user

Now switch the authentication backend in the settings.py module:

settings.py

AUTHENTICATION_BACKENDS = ('mysite.core.backends.CaseInsensitiveModelBackend', )

Please note that 'mysite.core.backends.CaseInsensitiveModelBackend' must be changed to the actual path where you created the backends.py module.

It is important to have handled all conflicting users before changing the authentication backend, because otherwise the iexact lookup could raise a MultipleObjectsReturned exception (a 500 error).

Fixing the username validation to accept ASCII letters only

Here we can borrow the built-in UsernameField and customize it to append the ASCIIUsernameValidator to the list of validators:

from django.contrib.auth.forms import UsernameField
from django.contrib.auth.validators import ASCIIUsernameValidator

class ASCIIUsernameField(UsernameField):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.validators.append(ASCIIUsernameValidator())

Then on the Meta of your User creation form you can replace the form field class:

class UserCreationForm(forms.ModelForm):
    # field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}

Fixing the email uniqueness and making it mandatory

Here all you can do is sanitize and handle the user input in all views where your users can modify their email address.

You have to include the email field on your sign up form/serializer as well.

Then just make it mandatory like this:

class UserCreationForm(forms.ModelForm):
    email = forms.EmailField(required=True)
    # other field definitions...

    class Meta:
        model = User
        fields = ("username",)
        field_classes = {'username': ASCIIUsernameField}

    def clean_email(self):
        email = self.cleaned_data.get("email")
        if User.objects.filter(email__iexact=email).exists():
            self.add_error("email", _("A user with this email already exists."))
        return email

You can also check a complete and detailed example of this form on the project shared together with this post: userworkarounds

Replacing the default User model

Now I’m going to show you how I usually like to extend and replace the default User model. It is a little bit verbose but that is the strategy that will allow you to access all the inner parts of the User model and make it better.

To replace the User model you have two options: extending the AbstractBaseUser or extending the AbstractUser.

To illustrate what that means, I drew the following diagram of how the default Django user model is implemented:

User Model Diagram

The green circle identified with the label User is actually the one you import from django.contrib.auth.models and that is the implementation that we discussed in this article.

If you look at the source code, its implementation looks like this:

class User(AbstractUser):
    class Meta(AbstractUser.Meta):
        swappable = 'AUTH_USER_MODEL'

So basically it is just an implementation of the AbstractUser, meaning all the fields and logic are implemented in the abstract class.

It is done that way so we can easily extend the User model by creating a sub-class of the AbstractUser and adding any other features and fields we like.

But there is a limitation: you can't override an existing model field. For example, you can't re-define the email field to make it mandatory or to change its length.

So extending the AbstractUser class is only useful when you want to modify its methods, add more fields or swap the objects manager.

If you want to remove a field or change how the field is defined, you have to extend the user model from the AbstractBaseUser.

The best strategy to have full control over the user model is creating a new concrete class from the PermissionsMixin and the AbstractBaseUser.

Note that the PermissionsMixin is only necessary if you intend to use the Django admin or the built-in permissions framework. If you are not planning to use them, you can leave it out. And if things change in the future, you can add the mixin, migrate the model, and you are ready to go.

So the implementation strategy looks like this:

Custom User Model Diagram

Now I’m going to show you my go-to implementation. I always use PostgreSQL, which, in my opinion, is the best database to use with Django. At least it is the one with the most support and features anyway. So I’m going to show an approach that uses PostgreSQL’s CITextExtension. Then I will show some options if you are using other database engines.

For this implementation I always create an app named accounts:

django-admin startapp accounts

Then before adding any code I like to create an empty migration to install the PostgreSQL extensions that we are going to use:

python manage.py makemigrations accounts --empty --name="postgres_extensions"

Inside the migrations directory of the accounts app you will find an empty migration called 0001_postgres_extensions.py.

Modify the file to include the extension installation:

migrations/0001_postgres_extensions.py

from django.contrib.postgres.operations import CITextExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        CITextExtension()
    ]

Now let’s implement our model. Open the models.py file inside the accounts app.

I always grab the initial code directly from Django’s source on GitHub, copying the AbstractUser implementation, and modify it accordingly:

accounts/models.py

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager
from django.contrib.auth.validators import ASCIIUsernameValidator
from django.contrib.postgres.fields import CICharField, CIEmailField
from django.core.mail import send_mail
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _


class CustomUser(AbstractBaseUser, PermissionsMixin):
    username_validator = ASCIIUsernameValidator()

    username = CICharField(
        _("username"),
        max_length=150,
        unique=True,
        help_text=_("Required. 150 characters or fewer. Letters, digits and @/./+/-/_ only."),
        validators=[username_validator],
        error_messages={
            "unique": _("A user with that username already exists."),
        },
    )
    first_name = models.CharField(_("first name"), max_length=150, blank=True)
    last_name = models.CharField(_("last name"), max_length=150, blank=True)
    email = CIEmailField(
        _("email address"),
        unique=True,
        error_messages={
            "unique": _("A user with that email address already exists."),
        },
    )
    is_staff = models.BooleanField(
        _("staff status"),
        default=False,
        help_text=_("Designates whether the user can log into this admin site."),
    )
    is_active = models.BooleanField(
        _("active"),
        default=True,
        help_text=_(
            "Designates whether this user should be treated as active. Unselect this instead of deleting accounts."
        ),
    )
    date_joined = models.DateTimeField(_("date joined"), default=timezone.now)

    objects = UserManager()

    EMAIL_FIELD = "email"
    USERNAME_FIELD = "username"
    REQUIRED_FIELDS = ["email"]

    class Meta:
        verbose_name = _("user")
        verbose_name_plural = _("users")

    def clean(self):
        super().clean()
        self.email = self.__class__.objects.normalize_email(self.email)

    def get_full_name(self):
        """
        Return the first_name plus the last_name, with a space in between.
        """
        full_name = "%s %s" % (self.first_name, self.last_name)
        return full_name.strip()

    def get_short_name(self):
        """Return the short name for the user."""
        return self.first_name

    def email_user(self, subject, message, from_email=None, **kwargs):
        """Send an email to this user."""
        send_mail(subject, message, from_email, [self.email], **kwargs)

Let’s review what we changed here:

  • We switched the username_validator to use ASCIIUsernameValidator
  • The username field now is using CICharField which is not case-sensitive
  • The email field is now mandatory, unique and is using CIEmailField which is not case-sensitive

On the settings module, add the following configuration:

settings.py

AUTH_USER_MODEL = "accounts.CustomUser"

Now we are ready to create our migrations:

python manage.py makemigrations 

Apply the migrations:

python manage.py migrate

And you should get a similar result if you are just creating your project and there are no other models/apps:

Operations to perform:
  Apply all migrations: accounts, admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK

If you check your database schema you will see that there is no auth_user table (which is the default one), and the user is now stored in the table accounts_customuser:

Database Schema

And all the foreign keys to the user model will be created pointing to this table. That’s why it is important to do this right at the beginning of your project, before you create the database schema.

Now you have all the freedom. You can replace the first_name and last_name fields with just one field called name. You could remove the username field and identify your User model by the email (then just make sure you change the USERNAME_FIELD property to email, as in the sketch below).
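
For illustration only, a minimal sketch of that email-only variant (this is not the accounts/models.py shown above; the default UserManager expects a username, so a small custom manager is needed as well):

from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
from django.contrib.auth.models import PermissionsMixin
from django.db import models


class EmailUserManager(BaseUserManager):
    use_in_migrations = True

    def create_user(self, email, password=None, **extra_fields):
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user


class EmailUser(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField("email address", unique=True)
    name = models.CharField("name", max_length=300, blank=True)
    is_staff = models.BooleanField(default=False)
    is_active = models.BooleanField(default=True)

    objects = EmailUserManager()

    USERNAME_FIELD = "email"
    EMAIL_FIELD = "email"
    REQUIRED_FIELDS = []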

You can grab the source code on GitHub: customuser

Handling case-insensitive without PostgreSQL

If you are not using PostgreSQL and want to implement case-insensitive authentication and you have direct access to the User model, a nice hack is to create a custom manager for the User model, like this:

accounts/models.py

from django.contrib.auth.base_user import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin, UserManager

class CustomUserManager(UserManager):
    def get_by_natural_key(self, username):
        case_insensitive_username_field = '{}__iexact'.format(self.model.USERNAME_FIELD)
        return self.get(**{case_insensitive_username_field: username})

class CustomUser(AbstractBaseUser, PermissionsMixin):
    # all the fields, etc...

    objects = CustomUserManager()

    # meta, methods, etc...

Then you could also sanitize the username field in the clean() method to always save it as lowercase, so you don’t have to worry about case-variant/conflicting usernames:

def clean(self):
    super().clean()
    self.email = self.__class__.objects.normalize_email(self.email)
    self.username = self.username.lower()

Conclusions

In this tutorial we discussed a few caveats of the default User model implementation and presented a few options to address those issues.

The takeaway message here is: always replace the default User model.

If your project is already in production, don’t panic: there are ways to fix those issues following the recommendations in this post.

I also have two detailed blog posts: one on how to make the username field case-insensitive and another on how to extend the Django user model:

You can also explore the source code presented in this post on GitHub:

27-06-2021

09:33

How to Start a Production-Ready Django Project [Simple is Better Than Complex]

In this tutorial I’m going to show you how I usually start and organize a new Django project nowadays. I’ve tried many different configurations and ways to organize the project, but for the past 4 years or so this has been consistently my go-to setup.

Please note that this is not intended to be a “best practice” guide or to fit every use case. It’s just the way I like to use Django, and it’s also the way I have found allows a project to grow in a healthy way.

Index


Premises

Usually those are the premises I take into account when setting up a project:

  • Separation of code and configuration
  • Multiple environments (production, staging, development, local)
  • Local/development environment first
  • Internationalization and localization
  • Testing and documentation
  • Static checks and styling rules
  • Not all apps must be pluggable
  • Debugging and logging

Environments/Modes

Usually I work with three environment dimensions in my code: local, tests and production. I like to see each as a “mode” in which I run the project. What dictates which mode I’m running the project in is the settings module I’m currently using.

Local

The local dimension always comes first. It is the settings and setup that a developer will use on their local machine.

All the defaults and configurations must be set up to serve the local development environment first.

The reason why I like to do it that way is that the project must be as simple as possible for a new hire to clone the repository, run the project and start coding.

The production environment will usually be configured and maintained by experienced developers who are more familiar with the code base itself. And because the deployment should be automated, there is no reason for people to be re-creating the production server over and over again. So it is perfectly fine for the production setup to require a few extra steps and configuration.

Tests

The tests environment will also be available locally, so developers can test the code and run the static checks.

But the idea of the tests environment is to expose it to a CI environment like Travis CI, Circle CI, AWS Code Pipeline, etc.

It is a simple setup in which you can install the project and run all the unit tests.

Production

The production dimension is the real deal. This is the environment that goes live without the testing and debugging utilities.

I also use this “mode” or dimension to run the staging server.

A staging server is where you roll out new features and bug fixes before applying them to the production server.

The idea here is that your staging server should run in production mode, and the only difference is going to be your static/media server and database server. And this can be achieved just by changing the configuration, for example the database connection string.

But the main thing is that you should not have any conditional in your code that checks whether it is the production or the staging server. The project should run in exactly the same way as in production.


Project Configuration

Right from the beginning it is a good idea to set up a remote version control service. My go-to option is Git on GitHub. Usually I create the remote repository first and then clone it on my local machine to get started.

Let’s say our project is called simple. After creating the repository on GitHub, I will create a directory named simple on my local machine; then, within that simple directory, I will clone the repository, as shown in the structure below:

simple/
└── simple/  (git repo)

Then I create the virtualenv outside of the Git repository:

simple/
├── simple/
└── venv/
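
A common way to create it (assuming Python 3's built-in venv module), run from inside the outer simple/ directory:

python3 -m venv venv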

Then alongside the simple and venv directories I may place some other support files related to the project which I do not plan to commit to the Git repository.

The reason I do that is that it is more convenient to destroy and re-create/re-clone either the virtual environment or the repository itself.

It is also good to store your virtual environment outside of the git repository/project root so you don’t need to bother ignoring its path when using libs like flake8, isort, black, tox, etc.

You can also use tools like virtualenvwrapper to manage your virtual environments, but I prefer doing it that way because everything is in one place. And if I no longer need to keep a given project on my local machine, I can delete it completely without leaving behind anything related to the project on my machine.

The next step is installing Django inside the virtualenv so we can use the django-admin commands.

source venv/bin/activate
pip install django

Inside the simple directory (where the git repository was cloned) start a new project:

django-admin startproject simple .

Pay attention to the . at the end of the command. It is necessary so as not to create yet another directory called simple.

So now the structure should be something like this:

simple/                   <- (1) Wrapper directory with all project contents including the venv
├── simple/               <- (2) Project root and git repository
│   ├── .git/
│   ├── manage.py
│   └── simple/           <- (3) Project package, apps, templates, static, etc
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

At this point I already complement the project package directory with three extra directories for templates, static and locale.

Both templates and static we are going to manage at the project level and the app level. These refer to the global templates and static files.

The locale directory is necessary in case you are using i18n to translate your application into other languages. This is where you are going to store the .mo and .po files.

So the structure now should be something like this:

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   └── simple/
│       ├── locale/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Requirements

Inside the project root (2) I like to create a directory called requirements with all the .txt files, breaking down the project dependencies like this:

  • base.txt: Main dependencies, strictly necessary to make the project run. Common to all environments
  • tests.txt: Inherits from base.txt + test utilities
  • local.txt: Inherits from tests.txt + development utilities
  • production.txt: Inherits from base.txt + production only dependencies

Note that I do not have a staging.txt requirements file; that’s because the staging environment is going to use the production.txt requirements, so we have an exact copy of the production environment.

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   └── simple/
│       ├── locale/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── settings.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Now let’s have a look inside each of those requirements files and at the Python libraries that I always use no matter what type of Django project I’m developing.

base.txt

dj-database-url==0.5.0
Django==3.2.4
psycopg2-binary==2.9.1
python-decouple==3.4
pytz==2021.1

  • dj-database-url: This is a very handy Django library to create a one-line database connection string, which is convenient for storing in .env files in a safe way
  • Django: Django itself
  • psycopg2-binary: PostgreSQL is my go-to database when working with Django. So I always have it here for all my environments
  • python-decouple: A typed environment variable manager to help protect sensitive data that goes to your settings.py module. It also helps with decoupling configuration from source code
  • pytz: For timezone aware datetime fields

tests.txt

-r base.txt

black==21.6b0
coverage==5.5
factory-boy==3.2.0
flake8==3.9.2
isort==5.9.1
tox==3.23.1

The -r base.txt inherits all the requirements defined in the base.txt file

  • black: A Python auto-formatter so you don’t have to bother with styling and formatting your code. It lets you focus on what really matters while coding and doing code reviews
  • coverage: Lib to generate test coverage reports of your project
  • factory-boy: A model factory to help you set up complex test cases where the code you are testing relies on multiple models being set up in a certain way
  • flake8: Checks for code complexity, PEPs, formatting rules, etc
  • isort: Auto-formatter for your imports so all imports are organized by blocks (standard library, Django, third-party, first-party, etc)
  • tox: I use tox as an interface for CI tools to run all code checks and unit tests

local.txt

-r tests.txt

django-debug-toolbar==3.2.1
ipython==7.25.0

The -r tests.txt inherits all the requirements defined in the base.txt and tests.txt files

  • django-debug-toolbar: 99% of the time I use it to debug the query count on complex views so you can optimize your database access
  • ipython: Improved Python shell. I use it all the time during the development phase to start some implementation or to inspect code

production.txt

-r base.txt

gunicorn==20.1.0
sentry-sdk==1.1.0

The -r base.txt inherits all the requirements defined in the base.txt file

  • gunicorn: A Python WSGI HTTP server for production used behind a proxy server like Nginx
  • sentry-sdk: Error reporting/logging tool to catch exceptions raised in production

Settings

Also following the environments and modes premise, I like to set up multiple settings modules. These are going to serve as the entry point to determine in which mode I’m running the project.

Inside the simple project package, I create a new directory called settings and break down the files like this:

simple/                       (1)
├── simple/                   (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   └── simple/              (3)
│       ├── locale/
│       ├── settings/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── local.py
│       │   ├── production.py
│       │   └── tests.py
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

Note that I removed the settings.py that used to live inside the simple/ (3) directory.

The majority of the code will live inside the base.py settings module.

Everything that we can set once in base.py and change via python-decouple should stay in base.py and never be repeated or overridden in the other settings modules.

After the removal of the main settings.py a nice touch is to modify the manage.py file to set the local.py as the default settings module so we can still run commands like python manage.py runserver without any further parameters:

manage.py

#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys


def main():
    """Run administrative tasks."""
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'simple.settings.local')  # <- here!
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()

Now let’s have a look at each of those settings modules.

base.py

from pathlib import Path

import dj_database_url
from decouple import Csv, config

BASE_DIR = Path(__file__).resolve().parent.parent


# ==============================================================================
# CORE SETTINGS
# ==============================================================================

SECRET_KEY = config("SECRET_KEY", default="django-insecure$simple.settings.local")

DEBUG = config("DEBUG", default=True, cast=bool)

ALLOWED_HOSTS = config("ALLOWED_HOSTS", default="127.0.0.1,localhost", cast=Csv())

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
]

DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"

ROOT_URLCONF = "simple.urls"

INTERNAL_IPS = ["127.0.0.1"]

WSGI_APPLICATION = "simple.wsgi.application"


# ==============================================================================
# MIDDLEWARE SETTINGS
# ==============================================================================

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
]


# ==============================================================================
# TEMPLATES SETTINGS
# ==============================================================================

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [BASE_DIR / "templates"],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]


# ==============================================================================
# DATABASES SETTINGS
# ==============================================================================

DATABASES = {
    "default": dj_database_url.config(
        default=config("DATABASE_URL", default="postgres://simple:simple@localhost:5432/simple"),
        conn_max_age=600,
    )
}


# ==============================================================================
# AUTHENTICATION AND AUTHORIZATION SETTINGS
# ==============================================================================

AUTH_PASSWORD_VALIDATORS = [
    {
        "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
    },
]


# ==============================================================================
# I18N AND L10N SETTINGS
# ==============================================================================

LANGUAGE_CODE = config("LANGUAGE_CODE", default="en-us")

TIME_ZONE = config("TIME_ZONE", default="UTC")

USE_I18N = True

USE_L10N = True

USE_TZ = True

LOCALE_PATHS = [BASE_DIR / "locale"]


# ==============================================================================
# STATIC FILES SETTINGS
# ==============================================================================

STATIC_URL = "/static/"

STATIC_ROOT = BASE_DIR.parent.parent / "static"

STATICFILES_DIRS = [BASE_DIR / "static"]

STATICFILES_FINDERS = (
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
)


# ==============================================================================
# MEDIA FILES SETTINGS
# ==============================================================================

MEDIA_URL = "/media/"

MEDIA_ROOT = BASE_DIR.parent.parent / "media"



# ==============================================================================
# THIRD-PARTY SETTINGS
# ==============================================================================


# ==============================================================================
# FIRST-PARTY SETTINGS
# ==============================================================================

SIMPLE_ENVIRONMENT = config("SIMPLE_ENVIRONMENT", default="local")

A few comments on the overall base settings file contents:

  • The config() calls are from the python-decouple library. They expose the configuration as an environment variable and retrieve its value, casting it to the expected data type. Read more about python-decouple in this guide: How to Use Python Decouple
  • See how configurations like SECRET_KEY, DEBUG and ALLOWED_HOSTS default to local/development environment values. That means a new developer won’t need to set up a local .env file and provide initial values to run the project locally (for production/staging, a sample .env is sketched after this list)
  • On the database settings block we are using the dj_database_url to translate this one line string to a Python dictionary as Django expects
  • Note how in MEDIA_ROOT we navigate two directories up to create a media directory outside the git repository but inside our project workspace (inside the directory simple/ (1)). That way everything is handy and we won’t be committing test uploads to our repository
  • At the end of the base.py settings I reserve two blocks: one for third-party Django libraries that I may install, such as Django Rest Framework or Django Crispy Forms, and one for first-party settings, which are custom settings created exclusively for our project. Usually I will prefix them with the project name, like SIMPLE_XXX
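
For production or staging, the defaults above could then be overridden with a .env file along these lines (a sketch with placeholder values, not real credentials):

SECRET_KEY=replace-me-with-a-long-random-string
DEBUG=False
ALLOWED_HOSTS=.example.com
DATABASE_URL=postgres://simple:change-me@db.example.com:5432/simple
TIME_ZONE=UTC
SIMPLE_ENVIRONMENT=production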

local.py

# flake8: noqa

from .base import *

INSTALLED_APPS += ["debug_toolbar"]

MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")


# ==============================================================================
# EMAIL SETTINGS
# ==============================================================================

EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"

This is where I will set up Django Debug Toolbar, for example, or set the email backend to display sent emails on the console instead of having to set up a valid email server to work on the project.

All the code that is only relevant for the development process goes here.

You can use it to set up other libs like Django Silk to run profiling without exposing it to production.

tests.py

# flake8: noqa

from .base import *

PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]


class DisableMigrations:
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return None


MIGRATION_MODULES = DisableMigrations()

Here I add configurations that help us run the test cases faster. Sometimes disabling the migrations may not work if you have interdependencies between the apps’ models, so Django may fail to create the test database without the migrations.

In some projects it is better to keep the test database after the execution.
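
For that case, Django's test runner can reuse the test database between runs with the --keepdb flag, for example:

python manage.py test --keepdb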

production.py

# flake8: noqa

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

import simple
from .base import *

# ==============================================================================
# SECURITY SETTINGS
# ==============================================================================

CSRF_COOKIE_SECURE = True
CSRF_COOKIE_HTTPONLY = True

SECURE_HSTS_SECONDS = 60 * 60 * 24 * 7 * 52  # one year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True
SECURE_BROWSER_XSS_FILTER = True
SECURE_CONTENT_TYPE_NOSNIFF = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

SESSION_COOKIE_SECURE = True


# ==============================================================================
# THIRD-PARTY APPS SETTINGS
# ==============================================================================

sentry_sdk.init(
    dsn=config("SENTRY_DSN", default=""),
    environment=SIMPLE_ENVIRONMENT,
    release="simple@%s" % simple.__version__,
    integrations=[DjangoIntegration()],
)

The most important part of the production settings is enabling all the security settings Django offers. I like to do it this way because you can’t run the development server with most of those configurations on.

The other thing is the Sentry configuration.

Note the simple.__version__ on the release. Next we are going to explore how I usually manage the version of the project.

Version

I like to reuse Django’s get_version utility for a simple and PEP 440 compliant version identifier.

Inside the project’s __init__.py module:

simple/
├── simple/
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py     <-- here!
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

You can do something like this:

from django import get_version

VERSION = (1, 0, 0, "final", 0)

__version__ = get_version(VERSION)

The only downside of using get_version directly from the Django module is that it won’t be able to resolve the git hash for alpha versions.

A possible solution is making a copy of the django/utils/version.py file into your project and then importing it locally, so it will be able to identify the git repository within the project folder.

But it also depends on what kind of versioning you are using for your project. If the version of your project is not really relevant to the end user and you just want to keep track of it for internal management, like identifying the release on a Sentry issue, you could use date-based release versioning.


Apps Configuration

A Django app is a Python package that you “install” using the INSTALLED_APPS in your settings file. An app can live pretty much anywhere: inside or outside the project package or even in a library that you installed using pip.

Indeed, your Django apps may be reusable in other projects. But that doesn’t mean they should be. Don’t let it dictate your project design, and don’t get obsessed over it. Also, an app shouldn’t necessarily represent a “part” of your website/web application.

It is perfectly fine for some apps not to have models, or for other apps to have only views. Some of your modules don’t even need to be a Django app at all. I like to see my Django project as a big Python package and organize it in a way that makes sense, not try to place everything inside reusable apps.

The general recommendation of the official Django documentation is to place your apps in the project root (alongside the manage.py file, identified here in this tutorial by the simple/ (2) folder).

But actually I prefer to create my apps inside the project package (identified in this tutorial by the simple/ (3) folder). I create a module named apps, and inside it I create my Django apps. The main reason is that it creates a nice namespace for the apps: it helps you easily identify that a particular import is part of your project, and the namespace also helps when creating logging rules to handle events in a different way.

Here is an example of how I do it:

simple/                      (1)
├── simple/                  (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/              (3)
│       ├── apps/            <-- here!
│       │   ├── __init__.py
│       │   ├── accounts/
│       │   └── core/
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

In the example above the folders accounts/ and core/ are Django apps created with the command django-admin startapp.

Those two apps are always in my projects. The accounts app is the one I use to replace the default Django User model, and it is also the place where I eventually implement password reset, account activation, sign up, etc.

The core app I use for general/global implementations, for example to define a model that will be used across most of the other apps. I try to keep it decoupled from the other apps, not importing their resources. It is usually a good place to implement general-purpose or reusable views and mixins.

Something to pay attention to when using this approach is that you need to change the name in the app configuration, inside the apps.py file of the Django app:

accounts/apps.py

from django.apps import AppConfig

class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'accounts'  # <- this is the default name created by the startapp command

You should rename it like this, to respect the namespace:

from django.apps import AppConfig

class AccountsConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'simple.apps.accounts'  # <- change to this!

Then in your INSTALLED_APPS you are going to reference your apps like this:

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    
    "simple.apps.accounts",
    "simple.apps.core",
]

The namespace also helps to organize your INSTALLED_APPS, making your project’s apps easily recognizable. It also pays off in the logging configuration, as sketched below.
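
As a rough sketch of that logging benefit (this LOGGING block is an assumption, not part of the original settings shown here), events from your own namespace can be routed differently from third-party ones in base.py:

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "loggers": {
        # Everything under the project namespace (simple.apps.accounts, simple.apps.core, ...)
        "simple": {"handlers": ["console"], "level": "INFO"},
    },
}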

App Structure

This is what my app structure looks like:

simple/                              (1)
├── simple/                          (2)
│   ├── .git/
│   ├── manage.py
│   ├── requirements/
│   └── simple/                      (3)
│       ├── apps/
│       │   ├── accounts/            <- My app structure
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── accounts/
│       │   │   ├── templates/
│       │   │   │   └── accounts/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   ├── core/
│       │   └── __init__.py
│       ├── locale/
│       ├── settings/
│       ├── static/
│       ├── templates/
│       ├── __init__.py
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

The first thing I do is create a folder named tests so I can break down my tests into several files. I always add a factories.py to create my model factories using the factory-boy library.
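
A minimal sketch of what that factories.py could contain (the field names follow the custom user model discussed earlier and may need adjusting):

import factory
from django.contrib.auth import get_user_model


class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = get_user_model()

    username = factory.Sequence(lambda n: f"user{n}")
    email = factory.LazyAttribute(lambda obj: f"{obj.username}@example.com")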

For both static and templates, always first create a directory with the same name as the app to avoid name collisions when Django collects all static files and tries to resolve the templates.

The admin.py may or may not be there, depending on whether I’m using the Django admin contrib app.

Other common modules that you may have are utils.py, forms.py, managers.py, services.py, etc.


Code style and formatting

Now I’m going to show you the configuration that I use for tools like isort, black, flake8, coverage and tox.

Editor Config

The .editorconfig file is a standard recognized by all major IDEs and code editors. It helps the editor understand the file formatting rules used in the project.

It tells the editor whether the project is indented with tabs or spaces, how many spaces/tabs to use, and what the maximum length for a line of code is.

I like to use Django’s .editorconfig file. Here is what it looks like:

.editorconfig

# https://editorconfig.org/

root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true
trim_trailing_whitespace = true
end_of_line = lf
charset = utf-8

# Docstrings and comments use max_line_length = 79
[*.py]
max_line_length = 119

# Use 2 spaces for the HTML files
[*.html]
indent_size = 2

# The JSON files contain newlines inconsistently
[*.json]
indent_size = 2
insert_final_newline = ignore

[**/admin/js/vendor/**]
indent_style = ignore
indent_size = ignore

# Minified JavaScript files shouldn't be changed
[**.min.js]
indent_style = ignore
insert_final_newline = ignore

# Makefiles always use tabs for indentation
[Makefile]
indent_style = tab

# Batch files use tabs for indentation
[*.bat]
indent_style = tab

[docs/**.txt]
max_line_length = 79

[*.yml]
indent_size = 2
Flake8

Flake8 is a Python library that wraps PyFlakes, pycodestyle and Ned Batchelder’s McCabe script. It is a great toolkit for checking your code base against coding style (PEP8), programming errors (like “library imported but unused” and “Undefined name”) and to check cyclomatic complexity.

To learn more about flake8, check this tutorial I posted a while ago: How to Use Flake8.

setup.cfg

[flake8]
exclude = .git,.tox,*/migrations/*
max-line-length = 119
isort

isort is a Python utility/library to sort imports alphabetically and automatically separate them into sections.

To learn more about isort, check this tutorial I posted a while ago: How to Use Python isort Library.

setup.cfg

[isort]
force_grid_wrap = 0
use_parentheses = true
combine_as_imports = true
include_trailing_comma = true
line_length = 119
multi_line_output = 3
skip = migrations
default_section = THIRDPARTY
known_first_party = simple
known_django = django
sections=FUTURE,STDLIB,DJANGO,THIRDPARTY,FIRSTPARTY,LOCALFOLDER

Pay attention to known_first_party: it should be the name of your project so isort can group your project’s imports.

Black

Black is a life-changing library to auto-format your Python applications. There is no way I’m coding in Python nowadays without using Black.

Here is the basic configuration that I use:

pyproject.toml

[tool.black]
line-length = 119
target-version = ['py38']
include = '\.pyi?$'
exclude = '''
  /(
      \.eggs
    | \.git
    | \.hg
    | \.mypy_cache
    | \.tox
    | \.venv
    | _build
    | buck-out
    | build
    | dist
    | migrations
  )/
'''
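For coverage and tox, a minimal configuration could look something like the sketch below (illustrative only, not necessarily what your project will need):

setup.cfg

[coverage:run]
source = simple
omit = */migrations/*

tox.ini

[tox]
envlist = py38
skipsdist = true

[testenv]
deps = -rrequirements/tests.txt
commands = python manage.py test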

Conclusions

In this tutorial I described my go-to project setup when working with Django. That’s pretty much how I start all my projects nowadays.

Here is the final project structure for reference:

simple/
├── simple/
│   ├── .git/
│   ├── .gitignore
│   ├── .editorconfig
│   ├── manage.py
│   ├── pyproject.toml
│   ├── requirements/
│   │   ├── base.txt
│   │   ├── local.txt
│   │   ├── production.txt
│   │   └── tests.txt
│   ├── setup.cfg
│   └── simple/
│       ├── __init__.py
│       ├── apps/
│       │   ├── accounts/
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── accounts/
│       │   │   ├── templates/
│       │   │   │   └── accounts/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   ├── core/
│       │   │   ├── migrations/
│       │   │   │   └── __init__.py
│       │   │   ├── static/
│       │   │   │   └── core/
│       │   │   ├── templates/
│       │   │   │   └── core/
│       │   │   ├── tests/
│       │   │   │   ├── __init__.py
│       │   │   │   └── factories.py
│       │   │   ├── __init__.py
│       │   │   ├── admin.py
│       │   │   ├── apps.py
│       │   │   ├── constants.py
│       │   │   ├── models.py
│       │   │   └── views.py
│       │   └── __init__.py
│       ├── locale/
│       ├── settings/
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── local.py
│       │   ├── production.py
│       │   └── tests.py
│       ├── static/
│       ├── templates/
│       ├── asgi.py
│       ├── urls.py
│       └── wsgi.py
└── venv/

You can also explore the code on GitHub: django-production-template.

04-03-2021

18:25

Zo installeer je Chrome OS op je (oude) computer [Laatste Artikelen - Webwereld]

Google timmert al jaren hard aan de weg met Chrome OS en brengt samen met verschillende computerfabrikanten Chrome-apparaten uit met dat besturingssysteem. Maar je hoeft niet per se een dedicated apparaat aan te schaffen, je kan het systeem ook zelf op je (oude) computer zetten en wij laten je zien hoe.

29-01-2021

12:47

How to Use Chart.js with Django [Simple is Better Than Complex]

Chart.js is a cool open source JavaScript library that helps you render HTML5 charts. It is responsive and offers eight different chart types.

In this tutorial we are going to explore a little bit of how to make Django talk with Chart.js and render some simple charts based on data extracted from our models.

Installation

For this tutorial all you are going to do is add the Chart.js lib to your HTML page:

<script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>

You can download it from Chart.js official website and use it locally, or you can use it from a CDN using the URL above.

Example Scenario

I’m going to use the same example I used for the tutorial How to Create Group By Queries With Django ORM, which is a good complement to this one, because the tricky part of working with charts is transforming the data so it fits in a bar chart / line chart / etc.

We are going to use the two models below, Country and City:

class Country(models.Model):
    name = models.CharField(max_length=30)

class City(models.Model):
    name = models.CharField(max_length=30)
    country = models.ForeignKey(Country, on_delete=models.CASCADE)
    population = models.PositiveIntegerField()

And the raw data stored in the database:

cities

id  name               country_id  population
1   Tokyo              28          36,923,000
2   Shanghai           13          34,000,000
3   Jakarta            19          30,000,000
4   Seoul              21          25,514,000
5   Guangzhou          13          25,000,000
6   Beijing            13          24,900,000
7   Karachi            22          24,300,000
8   Shenzhen           13          23,300,000
9   Delhi              25          21,753,486
10  Mexico City        24          21,339,781
11  Lagos              9           21,000,000
12  São Paulo          1           20,935,204
13  Mumbai             25          20,748,395
14  New York City      20          20,092,883
15  Osaka              28          19,342,000
16  Wuhan              13          19,000,000
17  Chengdu            13          18,100,000
18  Dhaka              4           17,151,925
19  Chongqing          13          17,000,000
20  Tianjin            13          15,400,000
21  Kolkata            25          14,617,882
22  Tehran             11          14,595,904
23  Istanbul           2           14,377,018
24  London             26          14,031,830
25  Hangzhou           13          13,400,000
26  Los Angeles        20          13,262,220
27  Buenos Aires       8           13,074,000
28  Xi'an              13          12,900,000
29  Paris              6           12,405,426
30  Changzhou          13          12,400,000
31  Shantou            13          12,000,000
32  Rio de Janeiro     1           11,973,505
33  Manila             18          11,855,975
34  Nanjing            13          11,700,000
35  Rhine-Ruhr         16          11,470,000
36  Jinan              13          11,000,000
37  Bangalore          25          10,576,167
38  Harbin             13          10,500,000
39  Lima               7           9,886,647
40  Zhengzhou          13          9,700,000
41  Qingdao            13          9,600,000
42  Chicago            20          9,554,598
43  Nagoya             28          9,107,000
44  Chennai            25          8,917,749
45  Bangkok            15          8,305,218
46  Bogotá             27          7,878,783
47  Hyderabad          25          7,749,334
48  Shenyang           13          7,700,000
49  Wenzhou            13          7,600,000
50  Nanchang           13          7,400,000
51  Hong Kong          13          7,298,600
52  Taipei             29          7,045,488
53  Dallas–Fort Worth  20          6,954,330
54  Santiago           14          6,683,852
55  Luanda             23          6,542,944
56  Houston            20          6,490,180
57  Madrid             17          6,378,297
58  Ahmedabad          25          6,352,254
59  Toronto            5           6,055,724
60  Philadelphia       20          6,051,170
61  Washington, D.C.   20          6,033,737
62  Miami              20          5,929,819
63  Belo Horizonte     1           5,767,414
64  Atlanta            20          5,614,323
65  Singapore          12          5,535,000
66  Barcelona          17          5,445,616
67  Munich             16          5,203,738
68  Stuttgart          16          5,200,000
69  Ankara             2           5,150,072
70  Hamburg            16          5,100,000
71  Pune               25          5,049,968
72  Berlin             16          5,005,216
73  Guadalajara        24          4,796,050
74  Boston             20          4,732,161
75  Sydney             10          5,000,500
76  San Francisco      20          4,594,060
77  Surat              25          4,585,367
78  Phoenix            20          4,489,109
79  Monterrey          24          4,477,614
80  Inland Empire      20          4,441,890
81  Rome               3           4,321,244
82  Detroit            20          4,296,611
83  Milan              3           4,267,946
84  Melbourne          10          4,650,000

countries

id  name
1   Brazil
2   Turkey
3   Italy
4   Bangladesh
5   Canada
6   France
7   Peru
8   Argentina
9   Nigeria
10  Australia
11  Iran
12  Singapore
13  China
14  Chile
15  Thailand
16  Germany
17  Spain
18  Philippines
19  Indonesia
20  United States
21  South Korea
22  Pakistan
23  Angola
24  Mexico
25  India
26  United Kingdom
27  Colombia
28  Japan
29  Taiwan

Example 1: Pie Chart

For the first example we are only going to retrieve the top 5 most populous cities and render them as a pie chart. In this strategy we return the chart data as part of the view context and inject the results into the JavaScript code using the Django Template Language.

views.py

from django.shortcuts import render
from mysite.core.models import City

def pie_chart(request):
    labels = []
    data = []

    queryset = City.objects.order_by('-population')[:5]
    for city in queryset:
        labels.append(city.name)
        data.append(city.population)

    return render(request, 'pie_chart.html', {
        'labels': labels,
        'data': data,
    })

Basically, in the view above we iterate through the City queryset and build a list of labels and a list of data. In this case, the data is the population count saved in the City model.

For the urls.py just a simple routing:

urls.py

from django.urls import path
from mysite.core import views

urlpatterns = [
    path('pie-chart/', views.pie_chart, name='pie-chart'),
]

Now the template. I got a basic snippet from the Chart.js Pie Chart Documentation.

pie_chart.html

{% extends 'base.html' %}

{% block content %}
  <div id="container" style="width: 75%;">
    <canvas id="pie-chart"></canvas>
  </div>

  <script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
  <script>

    var config = {
      type: 'pie',
      data: {
        datasets: [{
          data: {{ data|safe }},
          backgroundColor: [
            '#696969', '#808080', '#A9A9A9', '#C0C0C0', '#D3D3D3'
          ],
          label: 'Population'
        }],
        labels: {{ labels|safe }}
      },
      options: {
        responsive: true
      }
    };

    window.onload = function() {
      var ctx = document.getElementById('pie-chart').getContext('2d');
      window.myPie = new Chart(ctx, config);
    };

  </script>

{% endblock %}

In the example above the base.html template is not important, but you can see it in the code example I shared at the end of this post.

This strategy is not ideal, but it works fine. The downside is that we are using the Django Template Language to interfere with the JavaScript logic. When we put {{ data|safe }} we are injecting a variable that came from the server directly into the JavaScript code.

The code above looks like this:

Pie Chart


Example 2: Bar Chart with Ajax

As the title says, we are now going to render a bar chart using an async call.

views.py

from django.shortcuts import render
from django.db.models import Sum
from django.http import JsonResponse
from mysite.core.models import City

def home(request):
    return render(request, 'home.html')

def population_chart(request):
    labels = []
    data = []

    queryset = City.objects.values('country__name').annotate(country_population=Sum('population')).order_by('-country_population')
    for entry in queryset:
        labels.append(entry['country__name'])
        data.append(entry['country_population'])
    
    return JsonResponse(data={
        'labels': labels,
        'data': data,
    })

So here we are using two views. The home view is the main page where the chart is loaded. The other view, population_chart, has the sole responsibility of aggregating the data and returning a JSON response with the labels and data.

If you are wondering what this queryset is doing, it is grouping the cities by country and aggregating the total population of each country. The result is a list of country + total population. To learn more about this kind of query, have a look at this post: How to Create Group By Queries With Django ORM

urls.py

from django.urls import path
from mysite.core import views

urlpatterns = [
    path('', views.home, name='home'),
    path('population-chart/', views.population_chart, name='population-chart'),
]

home.html

{% extends 'base.html' %}

{% block content %}

  <div id="container" style="width: 75%;">
    <canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>
  </div>

  <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/chart.js@2.9.3/dist/Chart.min.js"></script>
  <script>

    $(function () {

      var $populationChart = $("#population-chart");
      $.ajax({
        url: $populationChart.data("url"),
        success: function (data) {

          var ctx = $populationChart[0].getContext("2d");

          new Chart(ctx, {
            type: 'bar',
            data: {
              labels: data.labels,
              datasets: [{
                label: 'Population',
                backgroundColor: 'blue',
                data: data.data
              }]          
            },
            options: {
              responsive: true,
              legend: {
                position: 'top',
              },
              title: {
                display: true,
                text: 'Population Bar Chart'
              }
            }
          });

        }
      });

    });

  </script>

{% endblock %}

Now we have a better separation of concerns. Looking at the chart container:

<canvas id="population-chart" data-url="{% url 'population-chart' %}"></canvas>

We added a reference to the URL that holds the chart rendering logic. Later on we are using it to execute the Ajax call.

var $populationChart = $("#population-chart");
$.ajax({
  url: $populationChart.data("url"),
  success: function (data) {
    // ...
  }
});

Inside the success callback we then finally execute the Chart.js related code using the JsonResponse data.

Bar Chart


Conclusions

I hope this tutorial helped you to get started with working with charts using Chart.js. I published another tutorial on the same subject a while ago but using the Highcharts library. The approach is pretty much the same: How to Integrate Highcharts.js with Django.

If you want to grab the code I used in this tutorial you can find it here: github.com/sibtc/django-chartjs-example.

How to Save Extra Data to a Django REST Framework Serializer [Simple is Better Than Complex]

In this tutorial you are going to learn how to pass extra data to your serializer, before saving it to the database.

Introduction

When using regular Django forms, there is this common pattern where we save the form with commit=False and then pass some extra data to the instance before saving it to the database, like this:

form = InvoiceForm(request.POST)
if form.is_valid():
    invoice = form.save(commit=False)
    invoice.user = request.user
    invoice.save()

This is very useful because we can save the required information using only one database query, and it also makes it possible to handle non-nullable columns that were not defined in the form.

To simulate this pattern using a Django REST Framework serializer you can do something like this:

serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user)

You can also pass several parameters at once:

serializer = InvoiceSerializer(data=request.data)
if serializer.is_valid():
    serializer.save(user=request.user, date=timezone.now(), status='sent')

Example Using APIView

In this example I created an app named core.

models.py

from django.contrib.auth.models import User
from django.db import models

class Invoice(models.Model):
    SENT = 1
    PAID = 2
    VOID = 3
    STATUS_CHOICES = (
        (SENT, 'sent'),
        (PAID, 'paid'),
        (VOID, 'void'),
    )

    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='invoices')
    number = models.CharField(max_length=30)
    date = models.DateTimeField(auto_now_add=True)
    status = models.PositiveSmallIntegerField(choices=STATUS_CHOICES)
    amount = models.DecimalField(max_digits=10, decimal_places=2)

serializers.py

from rest_framework import serializers
from core.models import Invoice

class InvoiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Invoice
        fields = ('number', 'amount')

views.py

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView
from core.models import Invoice
from core.serializers import InvoiceSerializer

class InvoiceAPIView(APIView):
    def post(self, request):
        serializer = InvoiceSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        serializer.save(user=request.user, status=Invoice.SENT)
        return Response(status=status.HTTP_201_CREATED)

Example Using ViewSet

Very similar example, using the same models.py and serializers.py as in the previous example.

views.py

from rest_framework.viewsets import ModelViewSet
from core.models import Invoice
from core.serializers import InvoiceSerializer

class InvoiceViewSet(ModelViewSet):
    queryset = Invoice.objects.all()
    serializer_class = InvoiceSerializer

    def perform_create(self, serializer):
        serializer.save(user=self.request.user, status=Invoice.SENT)

How to Use Date Picker with Django [Simple is Better Than Complex]

In this tutorial we are going to explore three date/datetime picker options that you can easily use in a Django project. We are going to explore how to do it manually first, then how to set up a custom widget, and finally how to use a third-party Django app with support for datetime pickers.


Introduction

The implementation of a date picker is mostly done on the front-end.

The key part of the implementation is to ensure Django will receive the date input value in the correct format, and also that Django will be able to reproduce the format when rendering a form with initial data.

We can also use custom widgets to provide a deeper integration between the front-end and back-end and also to promote better reuse throughout a project.

In the next sections we are going to explore following date pickers:

Tempus Dominus Bootstrap 4 Docs Source

Tempus Dominus Bootstrap 4

XDSoft DateTimePicker Docs Source

XDSoft DateTimePicker

Fengyuan Chen’s Datepicker Docs Source

Fengyuan Chen's Datepicker


Tempus Dominus Bootstrap 4

Docs Source

This is a great JavaScript library and it integrates well with Bootstrap 4. The downside is that it requires moment.js and more or less needs Font Awesome for the icons.

It only makes sense to use this library if you are already using Bootstrap 4 + jQuery; otherwise the list of CSS and JS dependencies may look a little bit overwhelming.

To install it you can use their CDN or download the latest release from their GitHub Releases page.

If you downloaded the code from the releases page, grab the processed code from the build/ folder.

Below, a static HTML example of the datepicker:

<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <title>Static Example</title>

    <!-- Bootstrap 4 -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/css/bootstrap.min.css" integrity="sha384-GJzZqFGwb1QTTN6wy59ffF1BuGJpLSa9DkKMp0DgiMDm4iYMj70gZWKYbI706tWS" crossorigin="anonymous">
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.6/umd/popper.min.js" integrity="sha384-wHAiFfRlMFy6i5SRaxvfOCifBUQy1xHdJ/yoi7FRNXMRBu5WHdZYu1hA6ZOblgut" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.2.1/js/bootstrap.min.js" integrity="sha384-B0UglyR+jN6CkvvICOB2joaf5I4l3gm9GU6Hc1og6Ls7i6U/mkkaduKaBhlAXv9k" crossorigin="anonymous"></script>

    <!-- Font Awesome -->
    <link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">

    <!-- Moment.js -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.23.0/moment.min.js" integrity="sha256-VBLiveTKyUZMEzJd6z2mhfxIqz3ZATCuVMawPZGzIfA=" crossorigin="anonymous"></script>

    <!-- Tempus Dominus Bootstrap 4 -->
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/css/tempusdominus-bootstrap-4.min.css" integrity="sha256-XPTBwC3SBoWHSmKasAk01c08M6sIA5gF5+sRxqak2Qs=" crossorigin="anonymous" />
    <script src="https://cdnjs.cloudflare.com/ajax/libs/tempusdominus-bootstrap-4/5.1.2/js/tempusdominus-bootstrap-4.min.js" integrity="sha256-z0oKYg6xiLq3yJGsp/LsY9XykbweQlHl42jHv2XTBz4=" crossorigin="anonymous"></script>

  </head>
  <body>

    <div class="input-group date" id="datetimepicker1" data-target-input="nearest">
      <input type="text" class="form-control datetimepicker-input" data-target="#datetimepicker1"/>
      <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
        <div class="input-group-text"><i class="fa fa-calendar"></i></div>
      </div>
    </div>

    <script>
      $(function () {
        $("#datetimepicker1").datetimepicker();
      });
    </script>

  </body>
</html>
Direct Usage

The challenge now is to have this input snippet integrated with a Django form.

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'],
        widget=forms.DateTimeInput(attrs={
            'class': 'form-control datetimepicker-input',
            'data-target': '#datetimepicker1'
        })
    )

template

<div class="input-group date" id="datetimepicker1" data-target-input="nearest">
  {{ form.date }}
  <div class="input-group-append" data-target="#datetimepicker1" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#datetimepicker1").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>

The script tag can be placed anywhere because the snippet $(function () { ... }); will run the datetimepicker initialization when the page is ready. The only requirement is that this script tag is placed after the jQuery script tag.

Custom Widget

You can create the widget in any app you want, here I’m going to consider we have a Django app named core.

core/widgets.py

from django.forms import DateTimeInput

class BootstrapDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/bootstrap_datetimepicker.html'

    def get_context(self, name, value, attrs):
        datetimepicker_id = 'datetimepicker_{name}'.format(name=name)
        if attrs is None:
            attrs = dict()
        attrs['data-target'] = '#{id}'.format(id=datetimepicker_id)
        attrs['class'] = 'form-control datetimepicker-input'
        context = super().get_context(name, value, attrs)
        context['widget']['datetimepicker_id'] = datetimepicker_id
        return context

In the implementation above we generate a unique ID datetimepicker_id and also include it in the widget context.

Then the front-end implementation is done inside the widget HTML snippet.

widgets/bootstrap_datetimepicker.html

<div class="input-group date" id="{{ widget.datetimepicker_id }}" data-target-input="nearest">
  {% include "django/forms/widgets/input.html" %}
  <div class="input-group-append" data-target="#{{ widget.datetimepicker_id }}" data-toggle="datetimepicker">
    <div class="input-group-text"><i class="fa fa-calendar"></i></div>
  </div>
</div>

<script>
  $(function () {
    $("#{{ widget.datetimepicker_id }}").datetimepicker({
      format: 'DD/MM/YYYY HH:mm',
    });
  });
</script>

Note how we make use of the built-in django/forms/widgets/input.html template.

Now the usage:

core/forms.py

from .widgets import BootstrapDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'], 
        widget=BootstrapDateTimePickerInput()
    )

Now simply render the field:

template

{{ form.date }}

The good thing about having the widget is that your form could have several date fields using the widget and you could simply render the whole form like:

<form method="post">
  {% csrf_token %}
  {{ form.as_p }}
  <input type="submit" value="Submit">
</form>

XDSoft DateTimePicker

Docs Source

The XDSoft DateTimePicker is a very versatile date picker and doesn’t rely on moment.js or Bootstrap, although it looks good in a Bootstrap website.

It is easy to use and very straightforward.

You can download the source from GitHub releases page.

Below, a static example so you can see the minimum requirements and how all the pieces come together:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <title>Static Example</title>

  <!-- jQuery -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>

  <!-- XDSoft DateTimePicker -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.min.css" integrity="sha256-DOS9W6NR+NFe1fUhEE0PGKY/fubbUCnOfTje2JMDw3Y=" crossorigin="anonymous" />
  <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-datetimepicker/2.5.20/jquery.datetimepicker.full.min.js" integrity="sha256-FEqEelWI3WouFOo2VWP/uJfs1y8KJ++FLh2Lbqc8SJk=" crossorigin="anonymous"></script>
</head>
<body>

  <input id="datetimepicker" type="text">

  <script>
    $(function () {
      $("#datetimepicker").datetimepicker();
    });
  </script>

</body>
</html>
Direct Usage

A basic integration with Django would look like this:

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateTimeField(input_formats=['%d/%m/%Y %H:%M'])

Simple form, default widget, nothing special.

Now using it on the template:

template

{{ form.date }}

<script>
  $(function () {
    $("#id_date").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>

The id_date is the default ID Django generates for the form fields (id_ + name).

Custom Widget

core/widgets.py

from django.forms import DateTimeInput

class XDSoftDateTimePickerInput(DateTimeInput):
    template_name = 'widgets/xdsoft_datetimepicker.html'

widgets/xdsoft_datetimepicker.html

{% include "django/forms/widgets/input.html" %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datetimepicker({
      format: 'd/m/Y H:i',
    });
  });
</script>

To have a more generic implementation, this time we select the field to initialize the component using its name instead of its id, in case the user changes the id prefix.

Now the usage:

core/forms.py

from django import forms
from .widgets import XDSoftDateTimePickerInput

class DateForm(forms.Form):
    date = forms.DateTimeField(
        input_formats=['%d/%m/%Y %H:%M'], 
        widget=XDSoftDateTimePickerInput()
    )

template

{{ form.date }}

Fengyuan Chen’s Datepicker

Docs Source

This is a very beautiful and minimalist date picker. Unfortunately there is no time support. But if you only need dates this is a great choice.

To install this datepicker you can either use their CDN or download the sources from their GitHub releases page. Please note that they do not provide compiled/processed JavaScript files, but you can download those to your local machine from the CDN.

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <title>Static Example</title>
  <style>body {font-family: Arial, sans-serif;}</style>
  
  <!-- jQuery -->
  <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>

  <!-- Fengyuan Chen's Datepicker -->
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.css" integrity="sha256-b88RdwbRJEzRx95nCuuva+hO5ExvXXnpX+78h8DjyOE=" crossorigin="anonymous" />
  <script src="https://cdnjs.cloudflare.com/ajax/libs/datepicker/0.6.5/datepicker.min.js" integrity="sha256-/7FLTdzP6CfC1VBAj/rsp3Rinuuu9leMRGd354hvk0k=" crossorigin="anonymous"></script>
</head>
<body>

  <input id="datepicker">

  <script>
    $(function () {
      $("#datepicker").datepicker();
    });
  </script>

</body>
</html>
Direct Usage

A basic integration with Django (note that we are now using DateField instead of DateTimeField):

forms.py

from django import forms

class DateForm(forms.Form):
    date = forms.DateField(input_formats=['%d/%m/%Y'])

template

{{ form.date }}

<script>
  $(function () {
    $("#id_date").datepicker({
      format:'dd/mm/yyyy',
    });
  });
</script>
Custom Widget

core/widgets.py

from django.forms import DateInput

class FengyuanChenDatePickerInput(DateInput):
    template_name = 'widgets/fengyuanchen_datepicker.html'

widgets/fengyuanchen_datepicker.html

{% include "django/forms/widgets/input.html" %}

<script>
  $(function () {
    $("input[name='{{ widget.name }}']").datepicker({
      format:'dd/mm/yyyy',
    });
  });
</script>

Usage:

core/forms.py

from django import forms
from .widgets import FengyuanChenDatePickerInput

class DateForm(forms.Form):
    date = forms.DateField(
        input_formats=['%d/%m/%Y'],
        widget=FengyuanChenDatePickerInput()
    )

template

{{ form.date }}

Conclusions

The implementation is very similar no matter what date/datetime picker you are using. Hopefully this tutorial provided some insights on how to integrate this kind of frontend library into a Django project.

As always, the best source of information about each of those libraries are their official documentation.

I also created an example project to show the usage and implementation of the widgets for each of the libraries presented in this tutorial. Grab the source code at github.com/sibtc/django-datetimepicker-example.

How to Implement Grouped Model Choice Field [Simple is Better Than Complex]

The Django forms API has two field types for working with multiple options: ChoiceField and ModelChoiceField.

Both use select input as the default widget and they work in a similar way, except that ModelChoiceField is designed to handle QuerySets and work with foreign key relationships.

A basic implementation using a ChoiceField would be:

class ExpenseForm(forms.Form):
    CHOICES = (
        (11, 'Credit Card'),
        (12, 'Student Loans'),
        (13, 'Taxes'),
        (21, 'Books'),
        (22, 'Games'),
        (31, 'Groceries'),
        (32, 'Restaurants'),
    )
    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)
Django ChoiceField

Grouped Choice Field

You can also organize the choices in groups to generate the <optgroup> tags like this:

class ExpenseForm(forms.Form):
    CHOICES = (
        ('Debt', (
            (11, 'Credit Card'),
            (12, 'Student Loans'),
            (13, 'Taxes'),
        )),
        ('Entertainment', (
            (21, 'Books'),
            (22, 'Games'),
        )),
        ('Everyday', (
            (31, 'Groceries'),
            (32, 'Restaurants'),
        )),
    )
    amount = forms.DecimalField()
    date = forms.DateField()
    category = forms.ChoiceField(choices=CHOICES)
Django Grouped ChoiceField

Grouped Model Choice Field

When you are using a ModelChoiceField unfortunately there is no built-in solution.

Recently I found a nice solution on Django’s ticket tracker, where someone proposed adding an opt_group argument to the ModelChoiceField.

While the discussion is still ongoing, Simon Charette proposed a really good solution.

Let’s see how we can integrate it in our project.

First consider the following models:

models.py

from django.db import models

class Category(models.Model):
    name = models.CharField(max_length=30)
    parent = models.ForeignKey('Category', on_delete=models.CASCADE, null=True)

    def __str__(self):
        return self.name

class Expense(models.Model):
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    date = models.DateField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE)

    def __str__(self):
        return self.amount

So now, instead of being a regular choices field, our category is a model, and the Expense model has a relationship with it through a foreign key.

If we create a ModelForm using this model, the result will be very similar to our first example.
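For reference, that plain ModelForm could be as simple as the sketch below; Django will render category as a regular (ungrouped) ModelChoiceField:

forms.py

from django import forms

from .models import Expense


class ExpenseForm(forms.ModelForm):
    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')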

To simulate grouped categories you will need the code below. First create a new module named fields.py:

fields.py

from functools import partial
from itertools import groupby
from operator import attrgetter

from django.forms.models import ModelChoiceIterator, ModelChoiceField


class GroupedModelChoiceIterator(ModelChoiceIterator):
    def __init__(self, field, groupby):
        self.groupby = groupby
        super().__init__(field)

    def __iter__(self):
        if self.field.empty_label is not None:
            yield ("", self.field.empty_label)
        queryset = self.queryset
        # Can't use iterator() when queryset uses prefetch_related()
        if not queryset._prefetch_related_lookups:
            queryset = queryset.iterator()
        for group, objs in groupby(queryset, self.groupby):
            yield (group, [self.choice(obj) for obj in objs])


class GroupedModelChoiceField(ModelChoiceField):
    def __init__(self, *args, choices_groupby, **kwargs):
        if isinstance(choices_groupby, str):
            choices_groupby = attrgetter(choices_groupby)
        elif not callable(choices_groupby):
            raise TypeError('choices_groupby must either be a str or a callable accepting a single argument')
        self.iterator = partial(GroupedModelChoiceIterator, groupby=choices_groupby)
        super().__init__(*args, **kwargs)

And here is how you use it in your forms:

forms.py

from django import forms
from .fields import GroupedModelChoiceField
from .models import Category, Expense

class ExpenseForm(forms.ModelForm):
    category = GroupedModelChoiceField(
        queryset=Category.objects.exclude(parent=None), 
        choices_groupby='parent'
    )

    class Meta:
        model = Expense
        fields = ('amount', 'date', 'category')
Django Grouped ModelChoiceField

Because in the example above I used a self-referencing relationship, I had to add exclude(parent=None) to prevent the “group categories” from showing up in the select input as valid options.


Further Reading

You can download the code used in this tutorial from GitHub: github.com/sibtc/django-grouped-choice-field-example

Credits for the solution go to Simon Charette on the Django ticket tracker.

How to Use JWT Authentication with Django REST Framework [Simple is Better Than Complex]

JWT stands for JSON Web Token and it is an authentication strategy used by client/server applications where the client is a web application using JavaScript and some frontend framework like Angular, React or Vue.js.

In this tutorial we are going to explore the specifics of JWT authentication. If you want to learn more about Token-based authentication using Django REST Framework (DRF), or if you want to know how to start a new DRF project you can read this tutorial: How to Implement Token Authentication using Django REST Framework. The concepts are the same, we are just going to switch the authentication backend.


How Does JWT Work?

The JWT is just an authorization token that should be included in all requests:

curl http://127.0.0.1:8000/hello/ -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9.Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY'

The JWT is acquired by exchanging a username + password for an access token and a refresh token.

The access token is usually short-lived (expires in 5 min or so, can be customized though).

The refresh token lives a little bit longer (expires in 24 hours, also customizable). It is comparable to an authentication session. After it expires, you need a full login with username + password again.

Why is that?

It’s a security feature, and it’s also because the JWT holds a little bit more information. If you look closely at the example I gave above, you will see the token is composed of three parts:

xxxxx.yyyyy.zzzzz

Those are three distinctive parts that compose a JWT:

header.payload.signature

So we have here:

header = eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9
payload = eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQzODI4NDMxLCJqdGkiOiI3ZjU5OTdiNzE1MGQ0NjU3OWRjMmI0OTE2NzA5N2U3YiIsInVzZXJfaWQiOjF9
signature = Ju70kdcaHKn1Qaz8H42zrOYk0Jx9kIckTn9Xx7vhikY

This information is encoded using Base64. If we decode, we will see something like this:

header

{
  "typ": "JWT",
  "alg": "HS256"
}

payload

{
  "token_type": "access",
  "exp": 1543828431,
  "jti": "7f5997b7150d46579dc2b49167097e7b",
  "user_id": 1
}

signature

The signature is issued by the JWT backend, using the header base64 + payload base64 + SECRET_KEY. Upon each request this signature is verified. If any information in the header or in the payload was changed by the client it will invalidate the signature. The only way of checking and validating the signature is by using your application’s SECRET_KEY. Among other things, that’s why you should always keep your SECRET_KEY secret!
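If you want to inspect a token yourself, the header and payload can be decoded with a few lines of plain Python (a standalone sketch, not something the API requires):

import base64
import json

token = 'xxxxx.yyyyy.zzzzz'  # replace with a real JWT
header_b64, payload_b64, signature_b64 = token.split('.')

def decode_segment(segment):
    # JWT segments are base64url-encoded without padding, so add it back
    padding = '=' * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(segment + padding))

print(decode_segment(header_b64))   # {'typ': 'JWT', 'alg': 'HS256'}
print(decode_segment(payload_b64))  # {'token_type': 'access', ...}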


Installation & Setup

For this tutorial we are going to use the djangorestframework_simplejwt library, recommended by the DRF developers.

pip install djangorestframework_simplejwt

settings.py

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
}

urls.py

from django.urls import path
from rest_framework_simplejwt import views as jwt_views

urlpatterns = [
    # Your URLs...
    path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
]

Example Code

For this tutorial I will use the following route and API view:

views.py

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

urls.py

from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]

Usage

I will be using HTTPie to consume the API endpoints via the terminal. But you can also use cURL (readily available on most operating systems) to try things out locally.

Or alternatively, use the DRF web interface by accessing the endpoint URLs like this:

DRF JWT Obtain Token

Obtain Token

The first step is to authenticate and obtain the token. The endpoint is /api/token/ and it only accepts POST requests.

http post http://127.0.0.1:8000/api/token/ username=vitor password=123

HTTPie JWT Obtain Token

So basically your response body is the two tokens:

{
    "access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjU5LCJqdGkiOiIyYmQ1NjI3MmIzYjI0YjNmOGI1MjJlNThjMzdjMTdlMSIsInVzZXJfaWQiOjF9.D92tTuVi_YcNkJtiLGHtcn6tBcxLCBxz9FKD3qzhUg8",
    "refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMxMDM1OSwianRpIjoiMjk2ZDc1ZDA3Nzc2NDE0ZjkxYjhiOTY4MzI4NGRmOTUiLCJ1c2VyX2lkIjoxfQ.rA-mnGRg71NEW_ga0sJoaMODS5ABjE5HnxJDb0F8xAo"
}

After that you are going to store both the access token and the refresh token on the client side, usually in the localStorage.

In order to access the protected views on the backend (i.e., the API endpoints that require authentication), you should include the access token in the header of all requests, like this:

http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"

HTTPie JWT Hello, World!

You can use this access token for the next five minutes.

After five minutes the token will expire, and if you try to access the view again, you are going to get the following error:

http http://127.0.0.1:8000/hello/ "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNTQ1MjI0MjAwLCJqdGkiOiJlMGQxZDY2MjE5ODc0ZTY3OWY0NjM0ZWU2NTQ2YTIwMCIsInVzZXJfaWQiOjF9.9eHat3CvRQYnb5EdcgYFzUyMobXzxlAVh_IAgqyvzCE"

HTTPie JWT Expired

Refresh Token

To get a new access token, you should use the refresh token endpoint /api/token/refresh/ posting the refresh token:

http post http://127.0.0.1:8000/api/token/refresh/ refresh=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTU0NTMwODIyMiwianRpIjoiNzAyOGFlNjc0ZTdjNDZlMDlmMzUwYjg3MjU1NGUxODQiLCJ1c2VyX2lkIjoxfQ.Md8AO3dDrQBvWYWeZsd_A1J39z6b6HEwWIUZ7ilOiPE

HTTPie JWT Refresh Token

The return is a new access token that you should use in the subsequent requests.

The refresh token is valid for the next 24 hours. When it finally expires too, the user will need to perform a full authentication again using their username and password to get a new set of access token + refresh token.


What’s The Point of The Refresh Token?

At first glance the refresh token may look pointless, but in fact it is necessary to make sure the user still has the correct permissions. If your access token has a long expiration time, it may take longer to update the information associated with the token. That’s because the authentication check is done by cryptographic means, instead of querying the database and verifying the data, so some information is effectively cached.

There is also a security aspect, in the sense that the refresh token only travels in the POST data, while the access token is sent via an HTTP header, which may be logged along the way. So this also gives only a short window of exposure, should your access token be compromised.


Further Reading

This should cover the basics on the backend implementation. It’s worth checking the djangorestframework_simplejwt settings for further customization and to get a better idea of what the library offers.
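For example, the token lifetimes mentioned earlier can be changed through the SIMPLE_JWT setting (the values below are the library defaults, shown here only as a sketch):

settings.py

from datetime import timedelta

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(minutes=5),
    'REFRESH_TOKEN_LIFETIME': timedelta(days=1),
}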

The implementation on the frontend depends on what framework/library you are using; there are libraries and articles covering popular frontend frameworks like Angular, React, and Vue.js.

The code used in this tutorial is available at github.com/sibtc/drf-jwt-example.

Advanced Form Rendering with Django Crispy Forms [Simple is Better Than Complex]

[Django 2.1.3 / Python 3.6.5 / Bootstrap 4.1.3]

In this tutorial we are going to explore some of the Django Crispy Forms features to handle advanced/custom forms rendering. This blog post started as a discussion in our community forum, so I decided to compile the insights and solutions in a blog post to benefit a wider audience.


Introduction

Throughout this tutorial we are going to implement the following Bootstrap 4 form using Django APIs:

Bootstrap 4 Form

This was taken from the official Bootstrap 4 documentation as an example of how to use form rows.

NOTE!

The examples below refer to a base.html template. Consider the code below:

base.html

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
  <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
</head>
<body>
  <div class="container">
    {% block content %}
    {% endblock %}
  </div>
</body>
</html>

Installation

Install it using pip:

pip install django-crispy-forms

Add it to your INSTALLED_APPS and select which styles to use:

settings.py

INSTALLED_APPS = [
    ...

    'crispy_forms',
]

CRISPY_TEMPLATE_PACK = 'bootstrap4'

For detailed instructions about how to install django-crispy-forms, please refer to this tutorial: How to Use Bootstrap 4 Forms With Django


Basic Form Rendering

The Python code required to represent the form above is the following:

from django import forms

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

In this case I’m using a regular Form, but it could also be a ModelForm based on a Django model with similar fields. The state field and the STATES choices could be either a foreign key or anything else. Here I’m just using a simple static example with three Brazilian states.

Template:

{% extends 'base.html' %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <table>{{ form.as_table }}</table>
    <button type="submit">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Simple Django Form

Rendered HTML with validation state:

Simple Django Form Validation State


Basic Crispy Form Rendering

Same form code as in the example before.

Template:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    {{ form|crispy }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Crispy Django Form

Rendered HTML with validation state:

Crispy Django Form Validation State


Custom Fields Placement with Crispy Forms

Same form code as in the first example.

Template:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  <form method="post">
    {% csrf_token %}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">
        {{ form.email|as_crispy_field }}
      </div>
      <div class="form-group col-md-6 mb-0">
        {{ form.password|as_crispy_field }}
      </div>
    </div>
    {{ form.address_1|as_crispy_field }}
    {{ form.address_2|as_crispy_field }}
    <div class="form-row">
      <div class="form-group col-md-6 mb-0">
        {{ form.city|as_crispy_field }}
      </div>
      <div class="form-group col-md-4 mb-0">
        {{ form.state|as_crispy_field }}
      </div>
      <div class="form-group col-md-2 mb-0">
        {{ form.zip_code|as_crispy_field }}
      </div>
    </div>
    {{ form.check_me_out|as_crispy_field }}
    <button type="submit" class="btn btn-primary">Sign in</button>
  </form>
{% endblock %}

Rendered HTML:

Custom Crispy Django Form

Rendered HTML with validation state:

Custom Crispy Django Form Validation State


Crispy Forms Layout Helpers

We could use the crispy forms layout helpers to achieve the same result as above. The implementation is done inside the form __init__ method:

forms.py

from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Submit, Row, Column

STATES = (
    ('', 'Choose...'),
    ('MG', 'Minas Gerais'),
    ('SP', 'Sao Paulo'),
    ('RJ', 'Rio de Janeiro')
)

class AddressForm(forms.Form):
    email = forms.CharField(widget=forms.TextInput(attrs={'placeholder': 'Email'}))
    password = forms.CharField(widget=forms.PasswordInput())
    address_1 = forms.CharField(
        label='Address',
        widget=forms.TextInput(attrs={'placeholder': '1234 Main St'})
    )
    address_2 = forms.CharField(
        widget=forms.TextInput(attrs={'placeholder': 'Apartment, studio, or floor'})
    )
    city = forms.CharField()
    state = forms.ChoiceField(choices=STATES)
    zip_code = forms.CharField(label='Zip')
    check_me_out = forms.BooleanField(required=False)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            'check_me_out',
            Submit('submit', 'Sign in')
        )

The template implementation is very minimal:

{% extends 'base.html' %}

{% load crispy_forms_tags %}

{% block content %}
  {% crispy form %}
{% endblock %}

The end result is the same.

Rendered HTML:

Custom Crispy Django Form

Rendered HTML with validation state:

Custom Crispy Django Form Validation State


Custom Crispy Field

You may also customize the field template and easily reuse it throughout your application. Let’s say we want to use the custom Bootstrap 4 checkbox:

Bootstrap 4 Custom Checkbox

From the official documentation, the necessary HTML to output the input above:

<div class="custom-control custom-checkbox">
  <input type="checkbox" class="custom-control-input" id="customCheck1">
  <label class="custom-control-label" for="customCheck1">Check this custom checkbox</label>
</div>

Using the crispy forms API, we can create a new template for this custom field in our “templates” folder:

custom_checkbox.html

{% load crispy_forms_field %}

<div class="form-group">
  <div class="custom-control custom-checkbox">
    {% crispy_field field 'class' 'custom-control-input' %}
    <label class="custom-control-label" for="{{ field.id_for_label }}">{{ field.label }}</label>
  </div>
</div>

Now we can create a new crispy field, either in our forms.py module or in a new Python module named fields.py, for example.

forms.py

from crispy_forms.layout import Field

class CustomCheckbox(Field):
    template = 'custom_checkbox.html'

We can use it now in our form definition:

forms.py

class CustomFieldForm(AddressForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.layout = Layout(
            Row(
                Column('email', css_class='form-group col-md-6 mb-0'),
                Column('password', css_class='form-group col-md-6 mb-0'),
                css_class='form-row'
            ),
            'address_1',
            'address_2',
            Row(
                Column('city', css_class='form-group col-md-6 mb-0'),
                Column('state', css_class='form-group col-md-4 mb-0'),
                Column('zip_code', css_class='form-group col-md-2 mb-0'),
                css_class='form-row'
            ),
            CustomCheckbox('check_me_out'),  # <-- Here
            Submit('submit', 'Sign in')
        )

(PS: the AddressForm was defined here and is the same as in the previous example.)

The end result:

Bootstrap 4 Custom Checkbox


Conclusions

There is much more Django Crispy Forms can do. Hopefully this tutorial gave you some extra insights on how to use the form helpers and layout classes. As always, the official documentation is the best source of information:

Django Crispy Forms layouts docs

Also, the code used in this tutorial is available on GitHub at github.com/sibtc/advanced-crispy-forms-examples.

How to Implement Token Authentication using Django REST Framework [Simple is Better Than Complex]

In this tutorial you are going to learn how to implement Token-based authentication using Django REST Framework (DRF). The token authentication works by exchanging a username and password for a token that will be used in all subsequent requests to identify the user on the server side.

The specifics of how the authentication is handled on the client side vary a lot depending on the technology/language/framework you are working with. The client could be a mobile application using iOS or Android. It could be a desktop application using Python or C++. It could be a Web application using PHP or Ruby.

But once you understand the overall process, it’s easier to find the necessary resources and documentation for your specific use case.

Token authentication is suitable for client-server applications where the token can be safely stored. You should never expose your token, as it would be (sort of) equivalent to handing out your username and password.


Setting Up The REST API Project

So let’s start from the very beginning. Install Django and DRF:

pip install django
pip install djangorestframework

Create a new Django project:

django-admin.py startproject myapi .

Navigate to the myapi folder:

cd myapi

Start a new app. I will call my app core:

django-admin.py startapp core

Here is what your project structure should look like:

myapi/
 |-- core/
 |    |-- migrations/
 |    |-- __init__.py
 |    |-- admin.py
 |    |-- apps.py
 |    |-- models.py
 |    |-- tests.py
 |    +-- views.py
 |-- __init__.py
 |-- settings.py
 |-- urls.py
 +-- wsgi.py
manage.py

Add the core app (you created) and the rest_framework app (you installed) to the INSTALLED_APPS, inside the settings.py module:

myapi/settings.py

INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',

    # Local Apps (Your project's apps)
    'myapi.core',
]

Return to the project root (the folder where the manage.py script is), and migrate the database:

python manage.py migrate

Let’s create our first API view just to test things out:

myapi/core/views.py

from rest_framework.views import APIView
from rest_framework.response import Response

class HelloView(APIView):
    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

Now register a path in the urls.py module:

myapi/urls.py

from django.urls import path
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
]

So now we have an API with just one endpoint, /hello/, against which we can perform GET requests. We can use the browser to consume this endpoint, just by accessing the URL http://127.0.0.1:8000/hello/:

Hello Endpoint HTML

We can also ask to receive the response as plain JSON data by passing the format parameter in the querystring like http://127.0.0.1:8000/hello/?format=json:

Hello Endpoint JSON

Both methods are fine for trying out a DRF API, but sometimes a command line tool is handier, as we can play more easily with the request headers. You can use cURL, which is widely available on all major Linux distributions and macOS:

curl http://127.0.0.1:8000/hello/

Hello Endpoint cURL

But usually I prefer to use HTTPie, which is a pretty awesome Python command line tool:

http http://127.0.0.1:8000/hello/

Hello Endpoint HTTPie

Now let’s protect this API endpoint so we can implement the token authentication:

myapi/core/views.py

from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.permissions import IsAuthenticated  # <-- Here


class HelloView(APIView):
    permission_classes = (IsAuthenticated,)             # <-- And here

    def get(self, request):
        content = {'message': 'Hello, World!'}
        return Response(content)

Try again to access the API endpoint:

http http://127.0.0.1:8000/hello/

Hello Endpoint HTTPie Forbidden

This time we get an HTTP 403 Forbidden error. Let's implement the token authentication so we can access this endpoint.
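If you are following along, the response body at this point should be the standard DRF "not authenticated" message (the exact wording may vary slightly between DRF versions):

{"detail": "Authentication credentials were not provided."}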


Implementing the Token Authentication

We need to add two pieces of information to our settings.py module. First, include rest_framework.authtoken in your INSTALLED_APPS, then add TokenAuthentication to the REST_FRAMEWORK setting:

myapi/settings.py

INSTALLED_APPS = [
    # Django Apps
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    # Third-Party Apps
    'rest_framework',
    'rest_framework.authtoken',  # <-- Here

    # Local Apps (Your project's apps)
    'myapi.core',
]

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.TokenAuthentication',  # <-- And here
    ],
}

Migrate the database to create the table that will store the authentication tokens:

python manage.py migrate

Migrate Auth Token

Now we need a user account. Let’s just create one using the manage.py command line utility:

python manage.py createsuperuser --username vitor --email vitor@example.com

The easiest way to generate a token, just for testing purposes, is to use the command-line utility again:

python manage.py drf_create_token vitor

drf_create_token

This piece of information, the random string 9054f7aa9305e012b3c2300408c3dfdf390fcddf is what we are going to use next to authenticate.
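Outside of quick tests you will usually create tokens in code rather than on the command line. Here is a minimal sketch using DRF's Token model (it assumes the rest_framework.authtoken app is installed and migrated, as above):

from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token

user = User.objects.get(username='vitor')
token, created = Token.objects.get_or_create(user=user)  # reuses the existing token if there is one
print(token.key)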

But now that we have the TokenAuthentication in place, let’s try to make another request to our /hello/ endpoint:

http http://127.0.0.1:8000/hello/

WWW-Authenticate Token

Notice how our API now responds with a WWW-Authenticate header, telling the client which authentication method is required.

So finally, let’s use our token!

http http://127.0.0.1:8000/hello/ 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'

REST Token Authentication

And that's pretty much it. From now on, every subsequent request should include the header Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf.

The formatting of this header looks a little odd and is a common point of confusion. How exactly you set it depends on the client you are using to make the HTTP request.

For example, if we were using cURL, the command would be something like this:

curl http://127.0.0.1:8000/hello/ -H 'Authorization: Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'

Or if it was a Python requests call:

import requests

url = 'http://127.0.0.1:8000/hello/'
headers = {'Authorization': 'Token 9054f7aa9305e012b3c2300408c3dfdf390fcddf'}
r = requests.get(url, headers=headers)

Or if we were using Angular, you could implement an HttpInterceptor and set a header:

import { Injectable } from '@angular/core';
import { HttpRequest, HttpHandler, HttpEvent, HttpInterceptor } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    const user = JSON.parse(localStorage.getItem('user'));
    if (user && user.token) {
      request = request.clone({
        setHeaders: {
          Authorization: `Token ${user.token}`
        }
      });
    }
    return next.handle(request);
  }
}

User Requesting a Token

DRF provides an endpoint for users to request an authentication token using their username and password.

Include the following route in the urls.py module:

myapi/urls.py

from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token  # <-- Here
from myapi.core import views

urlpatterns = [
    path('hello/', views.HelloView.as_view(), name='hello'),
    path('api-token-auth/', obtain_auth_token, name='api_token_auth'),  # <-- And here
]

So now we have a brand new API endpoint, which is /api-token-auth/. Let’s first inspect it:

http http://127.0.0.1:8000/api-token-auth/

API Token Auth

It doesn't handle GET requests; it is simply a view that accepts a POST request with a username and password.

Let’s try again:

http post http://127.0.0.1:8000/api-token-auth/ username=vitor password=123

API Token Auth POST

The response body contains the token associated with this particular user. From this point on, the client stores that token and includes it in future requests.
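Assuming the token we created earlier, the response body looks like this:

{"token": "9054f7aa9305e012b3c2300408c3dfdf390fcddf"}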

Then, again, the way you are going to make the POST request to the API depends on the language/framework you are using.

If this was an Angular client, you could store the token in localStorage; if it was a desktop CLI application, you could store it in a dot file in the user's home directory.
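As a rough illustration (not part of the original project), a command-line client in Python could obtain and cache the token like this; the file name ~/.myapi_token is just a hypothetical choice:

from pathlib import Path
import requests

TOKEN_FILE = Path.home() / '.myapi_token'  # hypothetical cache location

def get_token(username, password):
    # Reuse a previously saved token if we have one
    if TOKEN_FILE.exists():
        return TOKEN_FILE.read_text().strip()
    # Otherwise exchange the credentials for a token and save it
    r = requests.post('http://127.0.0.1:8000/api-token-auth/',
                      data={'username': username, 'password': password})
    r.raise_for_status()
    token = r.json()['token']
    TOKEN_FILE.write_text(token)
    return token

token = get_token('vitor', '123')
r = requests.get('http://127.0.0.1:8000/hello/',
                 headers={'Authorization': f'Token {token}'})
print(r.json())  # {'message': 'Hello, World!'}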


Conclusions

Hopefully this tutorial provided some insight into how token authentication works. I will try to follow up this tutorial with some concrete examples of Angular applications, command-line applications and Web clients as well.

It is important to note that the default Token implementation has some limitations, such as allowing only one token per user and providing no built-in way to set an expiry date on a token.

You can grab the code used in this tutorial at github.com/sibtc/drf-token-auth-example.

13-11-2020

17:24

30 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

29 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

28 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

27 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

26 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

25 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

24 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

23 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

22 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

21 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

20 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

19 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

18 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

17 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

16 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

15 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

14 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

13 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

12 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

11 April 2019 [GNOMON]

On 1 May 2019 my blog will be 10 years old, and then I am (for the time being) calling it quits. It is also time to bring this blog up to date and to keep myself busy with...

Python GUI application: consistent backups with fsarchiver [linux blogs franz ulenaers]

Python GUI application for making consistent backups with fsarchiver



A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a particular moment: this makes it possible to make consistent backups of logical volumes
while the logical volumes are in use!
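As a rough sketch of the idea (the volume names match the setup below; the snapshot size and archive path are just examples), a consistent backup of the home volume could look like this:

lvcreate --size 5G --snapshot --name home_snap /dev/mydell/home
fsarchiver savefs /backup/home.fsa /dev/mydell/home_snap
lvremove -f /dev/mydell/home_snap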





My physical and logical volumes were created as follows:

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell







start screen (screenshot)

LVM Logical Volumes [linux blogs franz ulenaers]

LVM = Logical Volume Manager



A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"!
A snapshot can be an exact copy of a logical volume frozen at a particular moment: this makes it possible to make consistent backups of logical volumes
while the logical volumes are in use!

How to install?

    sudo apt-get install lvm2



Create a physical volume for a partition

    command = 'pvcreate' partition

      example:

        the partition must be of type "Linux LVM"!

        pvcreate /dev/sda5



create a volume group

    vgcreate vg_storage partition

      example

        vgcreate mijnvg /dev/sda5



add a logical volume to a volume group

    lvcreate -L size_in_M/G -n logical_volume_name volume_group

      example:

        lvcreate -L 30G -n mijnhome mijnvg



activate a volume group

    vgchange -a y volume_group_name

      example:

        vgchange -a y mijnvg



My physical and logical volumes

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell



Growing/shrinking a logical volume

    grow my home logical volume by 1 G

      lvextend -L +1G /dev/mapper/mydell-home

    beware: shrinking a logical volume can lead to data loss if there is not enough space left!

      lvreduce -L -1G /dev/mapper/mydell-home



show physical volumes

    sudo pvs

    shown are: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space

      PV        VG     Fmt  Attr PSize   PFree

      /dev/sda6 mydell lvm2 a--  920,68g 500,63g

    sudo pvs -a

    sudo pvs /dev/sda6



Backing up the logical volume settings

    see the included script LVM_bkup



show volume group

    sudo vgs

      VG     #PV #LV #SN Attr   VSize   VFree

      mydell   1   6   0 wz--n- 920,68g 500,63g



show logical volume(s)

    sudo lvs

      LV       VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert

      boot     mydell -wi-ao---- 952,00m

      data     mydell -wi-ao---- 100,00g

      home     mydell -wi-ao----  93,13g

      mintroot mydell -wi-a----- 101,00g

      root     mydell -wi-ao----  94,06g

      swap     mydell -wi-ao----  30,93g



how to remove a logical volume?

    a logical volume can only be removed when it is not active

      you can deactivate the whole volume group with the vgchange command

        vgchange -a n mydell

    lvremove /dev/my_volume_group/logical_volume_name

      example:

        lvremove /dev/mydell/data




how to remove a physical volume from a volume group?

    vgreduce mydell /dev/sda1



Attachments: LVM_bkup (0.8 kB)




How to mount and unmount a USB stick without being root, with your own rwx permissions! [linux blogs franz ulenaers]

Mounting a stick without root

how to mount and unmount a USB stick without being root and with rwx permissions?
---------------------------------------------------------------------------------------------------------
(rename every ulefr01 to your own username!)

label the stick

  • use the 'fatlabel' command to assign a volume name or label if you use a vfat file system on your USB stick

  • use the 'tune2fs' command for ext2, ext3 or ext4

    • to give your USB stick the volume name stick32GB, use the command:

sudo tune2fs -L stick32GB /dev/sdc1

note: substitute the correct device for /dev/sdc1!


make the file system on your stick clean

  • after mounting you may see dmesg messages such as: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

    • use the file system consistency check command fsck to fix this

      • do a umount before you run the fsck command! (use the correct device!)

        • fsck /dev/sdc1

note: substitute your own device for /dev/sdc1!


set permissions on the folders and files of your stick

  • insert your stick into a USB port and umount the stick

sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
  • set an acl on your ext2/3/4 stick (does not work on vfat!)

setfacl -m u:ulefr01:rwx /media/ulefr01
  • with getfacl you can view the acl

getfacl /media/ulefr01
  • with the ls command you can see the result

ls /media/ulefr01 -dla

drwxrwx--- 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01

note: if the '+' is present then an acl is already in place, as on the following line:

drwxrwx---+ 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01


mount the stick

  • insert your stick into a USB port and check whether it is mounted automatically

  • check the permissions of the existing files and folders on your stick

ls * -la

  • if root or other ownership is already present, reset it with the following command

sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R

create a folder for each stick

  • cd /media/ulefr01

  • mkdir mmcblk16G stick32GB stick16gb


adapt /etc/fstab

  • add a line for each stick

    • examples

LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0


check the following

  • the following should now be possible:

    • mount and umount without being root

    • note: you cannot umount if the mount was done by root! If that is the case, first umount as root; after that, mount as a normal user and then you can also umount.

    • put a new file on your stick without being root

    • put a new folder on your stick without being root

  • check that you can create new files without being root

        • touch test

        • ls test -la

        • rm test


MyCloud procedures [linux blogs franz ulenaers]

MyCloud procedures

  • The procedure lftpUlefr01Cloudupload is used to upload files and folders to MyCloud

  • The procedure lftpUlefr01Cloudmirror is used to fetch changes back


Both procedures use the program lftp (a "sophisticated file transfer program") and are used to keep a laptop and a desktop synchronised.


The procedures were adapted so that hidden files and hidden folders are processed as well.

In addition, for the mirror, certain files and folders that rarely change are filtered out (--exclude) so that they are not processed again;

they remain on the Cloud as a backup, but not on the individual laptops (this was done for the older mails of 2016, months 2016-11 and 2016-12,

and for all earlier months of 2017, up to and including September!)

  • see the attachments
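Since the attached scripts are not reproduced here, this is only a rough sketch of what such an lftp mirror call might look like; the host, credentials and paths are placeholders:

lftp -u myuser,mypassword ftp://mycloud.example.com \
     -e "mirror --reverse --only-newer --verbose --exclude .cache/ /home/ulefr01/Documents /Documents; quit"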


Setting an acl [linux blogs franz ulenaers]

setfacl

note: generally possible on Linux file systems: btrfs, ext2, ext3, ext4 and ReiserFS!

  • How to set an acl for one user?

setfacl -m u:ulefr01:rwx /home/ulefr01

note: use your own username instead of ulefr01

  • How to remove an acl?

setfacl -x u:ulefr01 /home/ulefr01
  • How to set an acl for two or more users?

setfacl -m u:ulefr01:rwx /home/ulefr01

setfacl -m u:myriam:r-x /home/ulefr01

note: use your second username instead of myriam; here myriam has no w (write) access, but does have r (read) and x (execute)!

  • How to list the acls that have been set?

getfacl home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x
group::---
mask::rwx
other::---
  • How to check the result?

getfacl home/ulefr01
 see above
ls /home/ulefr01 -dla
drwxrwx---+  ulefr01 ulefr01 4096 okt 1 18:40  /home/ulefr01

note the + sign!


python GUI application tune2fs [linux blogs franz ulenaers]

python GUI application for the tune2fs command

Created Wednesday 18 October 2017

written in the Python programming language using GTK+ 3

start it in a terminal with: sudo python mytune2fs.py

or compile the Python source and start the compiled version


see the attachments:
* pdf
* mytune2fs.py

Python GUI application myarchive.py [linux blogs franz ulenaers]

python GUI application for making backups with fsarchiver

Created Friday 13 October 2017

GUI application for making backups, showing archive info and restoring with fsarchiver

see the included file: python_GUI_applicatie_backups_maken_met_fsarchiver.pdf


start in terminal mode with:

* sudo python myarchive.py

* sudo python myarchive2.py

or by making a compiled version and starting the generated objects


python myfsck.py [linux blogs franz ulenaers]

python GUI application for the fsck command

Created Friday 13 October 2017

see the included file myfsck.py

This application can mount and unmount devices, but is mainly intended to run the fsck command

Root privileges are required!

help?

* start in terminal mode

* sudo python myfsck.py


The best (most performant) file system on a USB stick: how to set it up? [linux blogs franz ulenaers]

the best file system on a USB stick, how to set it up?

the best (most performant) file system is ext4

  • how to set it up?

mkfs.ext4 $device
  • first turn the journal off

tune2fs -O ^has_journal $device
  • do journaling only in data_writeback mode

tune2fs -o journal_data_writeback $device
  • do not use reserved space, set it to zero

tune2fs -m 0 $device


  • for the three tune2fs actions above, the included bash script can be used:



file USBperf

# USBperfext4

echo 'USBperf'
echo '--------'
echo 'ext4 device ?'
read device
echo "device= $device"
echo 'ok ?'
read ok
if [ "$ok" == '' ] || [ "$ok" == 'n' ] || [ "$ok" == 'N' ]
then
   echo 'not ok - stopping'
   exit 1
fi
echo "disable journaling: tune2fs -O ^has_journal $device"
tune2fs -O ^has_journal $device
echo "use writeback data mode for the file system: tune2fs -o journal_data_writeback $device"
tune2fs -o journal_data_writeback $device
echo "disable reserved space: tune2fs -m 0 $device"
tune2fs -m 0 $device
echo 'done !'
read ok
echo "device= $device"
exit 0


  • adapt the /etc/fstab file for your USB stick

    • use the 'noatime' option

Making a file impossible to modify, rename or delete in Linux! [linux blogs franz ulenaers]

Making a file impossible to modify, rename or delete in Linux!


the file .encfs6.xml


how: sudo chattr +i /data/Encrypt/.encfs6.xml

you cannot modify the file, you cannot rename the file and you cannot delete the file, even if you are root

  • set the attribute
  • view the status
    • lsattr .encfs6.xml
      • ----i--------e-- .encfs6.xml
        • the i means immutable
  • to remove the immutable attribute
    • chattr -i .encfs6.xml



Backup laptop [linux blogs franz ulenaers]

The laptop is a multiboot system: Windows 7 with encryption and Linux Mint.
For the backup of my laptop, see http://users.telenet.be/franz.ulenaers/laptopca-new.html

Encryption [linux blogs franz ulenaers]

With encryption you can secure the data on your computer by making the data unreadable to the outside world!

How can you encrypt a file system?

install the following open-source packages:

    loop-aes-utils and cryptsetup

            apt-get install loop-aes-utils

            apt-get install cryptsetup

        modprobe cryptoloop
        add the following modules to your /etc/modules :
            aes
            dm_mod
            dm_crypt
            cryptoloop

How to create an encrypted file system?

  1. dd if=/dev/zero of=/home/cryptfile bs=1M count=650
     this creates a file 650 MB in size
  2. losetup -e aes /dev/loop0 /home/cryptfile
     you will then be asked for a password of at least 20 characters
  3. mkfs.ext3 /dev/loop0
     creates an ext3 file system with journaling
  4. mkdir /mnt/crypt
     creates an empty directory
  5. mount /dev/loop0 /mnt/crypt -t ext3
     you now have a file system available under /mnt/crypt

....

You can make your file system available automatically with the following entry in your /etc/fstab :

/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0

....

You can turn the encryption off with:

umount /mnt/crypt


losetup -d /dev/loop0        (this is no longer needed if you have the following entry in your /etc/fstab :
                /home/cryptfile /mnt/crypt ext3 auto,encryption=aes,exec 0 0 )
....
You can mount manually with:
  • losetup -e aes /dev/loop0 /home/cryptfile
    you are asked to enter a password of at least 20 characters
    if the password is wrong you get the following message:
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
        or too many mounted file systems
        ..
  • mount /dev/loop0 /mnt/crypt -t ext3
    this mounts the file system


Links in Linux [linux blogs franz ulenaers]

On Linux you can give files more than one name, so you can store a file in several places in the directory tree without (more or less) taking up extra space on the hard disk.

There are two kinds of links:

  1. hard links

  2. symbolic links

A hard link uses the same file number (inode).

A hard link does not work for a directory!

A hard link must be on the same file system, and the original file must exist!

With a symbolic link the file gets a new file number; the file it points to does not have to exist.

A symbolic link also works for a directory.

bash shell, user ulefr01

pwd
/home/ulefr01/cgcles/linux
ls linuxcursus.odt -ila
293800 -rw-r--r-- 1 ulefr01 ulefr01 4251348 2005-12-17 21:11 linuxcursus.odt

The file linuxcursus.odt is 4.2 MB in size, inode number 293800.

bash shell, user tom

pwd
/home/tom
ln /home/ulefr01/cgcles/linux/linuxcursus.odt cursuslinux.odt
tom@franz3:~ $ ls cursuslinux.odt -il
293800 -rw-r--r-- 2 ulefr01 ulefr01 4251348 2005-12-17 21:11 cursuslinux.odt
no extra 4.2 MB used, same inode number 293800!

bash shell, user root

pwd
/root
root@franz3:~ # ln /home/ulefr01/cgcles/linux/linuxcursus.odt linuxcursus.odt
root@franz3:~ # ls -il linux*
293800 -rw-rw-r-- 3 ulefr01 ulefr01 4251300 2005-12-17 21:31 linuxcursus.odt
no extra 4.2 MB used, same inode number 293800!

bash shell, user ulefr01, symbolic link

ln -s cgcles/linux/linuxcursus.odt linuxcursus.odt
ulefr01@franz3:~ $ ls -il linuxcursus.odt
1191741 lrwxrwxrwx 1 ulefr01 ulefr01 28 2005-12-17 21:42 linuxcursus.odt -> cgcles/linux/linuxcursus.odt
only 28 bytes

ln -s linuxcursus.odt test.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
only 15 bytes

rm linuxcursus.odt
ulefr01@franz3:~ $ ls *.odt -il
1193723 -rw-r--r-- 1 ulefr01 ulefr01 27521 2005-11-23 20:11 Backup&restore.odt
1193942 -rw-r--r-- 1 ulefr01 ulefr01 13535 2005-11-26 16:11 doc.odt
1191933 -rw------- 1 ulefr01 ulefr01  6135 2005-12-06 12:00 fru.odt
1193753 -rw-r--r-- 1 ulefr01 ulefr01 19865 2005-11-23 22:44 harddiskdata.odt
1193576 -rw-r--r-- 1 ulefr01 ulefr01  7198 2005-11-26 21:46 ooo-1.odt
1191749 -rw------- 1 ulefr01 ulefr01 22542 2005-12-06 16:16 Regen.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01    15 2005-12-17 22:00 test.odt -> linuxcursus.odt
test.odt now points to a file that does not exist!

18-02-2020

21:55

Samsung Galaxy Z Flip, S20(+) and S20 Ultra hands-on [Laatste Artikelen - Webwereld]

Samsung invited us to take a close look at its three newest smartphones. We gladly took the opportunity and share our findings with you.

02-02-2020

21:29

Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]

That your NAS can nowadays be used for much more than just storing files is well known, but did you know that you can also use it to manage virtual machines? We explain how.

23-01-2020

16:42

What you need to know about FIDO keys [Laatste Artikelen - Webwereld]

Thanks to the FIDO2 standard it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. More organisations are likely to follow this year.

How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]

These days you have to create an account for just about everything you want to do online, even if you do not plan to work online or simply do not feel like sharing your details with the manufacturer. Today we show you how to manage that on your iPhone or iPad.

Major Internet Explorer flaw already being exploited in the wild [Laatste Artikelen - Webwereld]

A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The flaw is already being exploited and a security update is not yet available.

How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]

The new version of Edge is built on code from the Chromium project, but in the default configuration extensions can only be installed from the Microsoft Store. Fortunately, that is fairly easy to change.

19-01-2020

12:59

Windows 10 upgrade still free [Laatste Artikelen - Webwereld]

A few years ago Microsoft gave users the option to upgrade from Windows 7 to Windows 10 for free. At times this went so far that even users who did not want it were upgraded. The offer has long since ended, but upgrading for free is still possible, and it is now easier than ever. We explain how.

Chrome, Edge, Firefox: which browser is the fastest? [Laatste Artikelen - Webwereld]

A lot has changed in the PC browser market. About five years ago there was more competition and more fully independent development; now only two engines remain: the one behind Chrome and the one behind Firefox. With the release of Microsoft's Blink-based Edge this month, we look at benchmarks and real-world tests.

Cooler Master redesigns thermal paste tubes over drug suspicions [Laatste Artikelen - Webwereld]

Cooler Master has changed the look of its thermal paste syringes because, by its own account, the company is tired of having to keep explaining to parents that the contents are not drugs but thermal paste.

06-03-2018

19-09-2017

10:33

Embedded Linux Engineer [Job Openings]

You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small-footprint distributions is considered a plus. Region East/West Flanders

Linux Teacher [Job Openings]

We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally the candidate has experience teaching Linux, and possibly other non-Windows OSes as well.

Kernel Developer [Job Openings]

We're looking for someone with kernel device driver development experience. Preferably, but not necessarily, with knowledge of AV or TV devices.

C/C++ Developers [Job Openings]

We're looking for Linux C/C++ developers. Region Leuven.

Feeds

Feed | RSS | Last fetched | Next fetched after
Computable XML 17-05-2022, 17:27 17-05-2022, 20:27
GNOMON XML 17-05-2022, 17:27 17-05-2022, 20:27
http://www.h-online.com/news/atom.xml XML 17-05-2022, 17:27 17-05-2022, 20:27
http://www.h-online.com/open/atom.xml XML 17-05-2022, 17:27 17-05-2022, 20:27
Job Openings XML 17-05-2022, 17:27 17-05-2022, 20:27
Laatste Artikelen - Webwereld XML 17-05-2022, 17:27 17-05-2022, 20:27
linux blogs franz ulenaers XML 17-05-2022, 17:27 17-05-2022, 20:27
Linux Journal - The Original Magazine of the Linux Community XML 17-05-2022, 17:27 17-05-2022, 20:27
Linux Today XML 17-05-2022, 17:27 17-05-2022, 20:27
OMG! Ubuntu! XML 17-05-2022, 17:27 17-05-2022, 20:27
Planet Python XML 17-05-2022, 17:27 17-05-2022, 20:27
Press Releases Archives - The Document Foundation Blog XML 17-05-2022, 17:27 17-05-2022, 20:27
Simple is Better Than Complex XML 17-05-2022, 17:27 17-05-2022, 20:27
Slashdot: Linux XML 17-05-2022, 17:27 17-05-2022, 20:27
Tech Drive-in XML 17-05-2022, 17:27 17-05-2022, 20:27
ulefr01 - blog franz ulenaers XML 17-05-2022, 17:27 17-05-2022, 20:27

Last modified: Tuesday 17 May 2022 20:27
Copyright © 2021 - Franz Ulenaers (email: franz.ulenaers@telenet.be)