19-01-2021

18:43

Exxellence in competitive battle with Centric and Roxit [Computable]

The Exxellence Groep from Hengelo, which supplies software to government bodies, has strengthened itself once again. With the backing of Main Capital it is acquiring Tercera from Kampen, a company specialised in software for digitalising spatial planning at...

Infor partner Avaap opens Amsterdam office [Computable]

Avaap, one of the larger Infor system integrators, has opened a Benelux office in Amsterdam in order to better support internationally operating customers in Europe. Besides Infor products, the American company specialises in software from vendors including Epic,...

Last-ditch attempt to stop Sanderink [Computable]

Brigitte van Egten, the ex-girlfriend of Centric owner Gerard Sanderink, is today making a final attempt to get the Twente IT entrepreneur to stop further damaging her reputation. Throughout the month of December, Sanderink, who according to lawyer Paul...

Krita 4.4.2 Released with New Tools, Brushes, and Halftone Filter [OMG! Ubuntu!]

Krita 4.4.2 has been released. In this post we recap the new features (like mesh gradients and an improved halftone filter) plus link to the Linux download.


How to Install Spotify on Ubuntu & Linux Mint [OMG! Ubuntu!]

Learn how to install Spotify on Ubuntu & Linux Mint using the Snap app, or add the Spotify repository to install the Spotify desktop player for Linux.


Formatting tricks for the Linux date command [Linuxtoday.com]

The Linux date command is simple, yet powerful. This article shows you how to unleash the power of the date command.

Stack Abuse: Python: Catch Multiple Exceptions in One Line [Planet Python]

Introduction

In this article we're going to take a look at the try/except clause: specifically, how you can catch multiple exceptions in a single line, and how to use the suppress() function.

Both of these techniques will help you write more accessible and versatile code that adheres to DRY (don't repeat yourself) principles.

Let's start by looking at the problem:

try:
    do_the_thing()
except TypeError as e:
    do_the_other_thing()
except KeyError as e:
    do_the_other_thing()
except IndexError as e:
    do_the_other_thing()

Brutal.

As we can see, this is very WET ("write everything twice") code: we repeat the same invocation multiple times. Practices like this can make reading and refactoring our code a living nightmare.

Rather than writing exceptions one after another, wouldn't it be better to group all of these exception handlers into a single line?

Multiple Exceptions

If you're just here for a quick answer, it's simple: use a tuple.

All the exception types contained in the tuple will be caught by the same handler:

try:
    do_the_thing()
except (TypeError, KeyError, IndexError) as e:
    do_the_other_thing()

Easy, right?

Avoiding Bad Practices

"Errors should never pass silently." - The Zen of Python.

try/except clauses are probably the most misused pattern in Python.

Used improperly, they end up being the cliché of drunks and lampposts, reached for only when the Python interpreter starts caroling the "12 Errors of Christmas".

It's very tempting to just put a try and a bare except clause on a problem to "make it go away". By doing that, we're effectively sweeping the exceptions under the rug, which is a shame, especially since they can be wonderfully helpful in recovering from potentially fatal errors or in shining a light on hidden bugs.

That's why when using except clauses you should always be sure to specify the errors you know you could encounter, and exclude the ones you don't.

Letting your program fail is okay, even preferable to pretending the problem doesn't exist.

"Errors should never pass silently... unless explicitly silenced."

However, for the once-in-a-blue-moon case where you genuinely want to ignore an exception, you can use suppress():

from contextlib import suppress

with suppress(TypeError, KeyError, IndexError):
    do_the_thing()

The suppress() function takes a number of exception types as arguments and performs the equivalent of a try/except/pass for those errors. As you can see, it also lets you handle multiple exceptions in a single line.

This lets you avoid writing a try/except/pass manually:

try:
    do_the_thing()
except (TypeError, KeyError, IndexError) as e:
    pass

Better yet, it's available in the standard library in Python 3.4 and above!

Conclusion

In this article, we've covered how to handle multiple exceptions in a single line. We've also briefly gone over some bad practices of ignoring exceptions, and used the suppress() function to suppress exceptions explicitly.

Django Weblog: Django 3.2 alpha 1 released [Planet Python]

Django 3.2 alpha 1 is now available. It represents the first stage in the 3.2 release cycle and is an opportunity for you to try out the changes coming in Django 3.2.

Django 3.2 has a mix of new features which you can read about in the in-development 3.2 release notes.

This alpha milestone marks the feature freeze. The current release schedule calls for a beta release in about a month and a release candidate about a month from then. We'll only be able to keep this schedule if we get early and often testing from the community. Updates on the release schedule are available on the django-developers mailing list.

As with all alpha and beta packages, this is not for production use. But if you'd like to take some of the new features for a spin, or to help find and fix bugs (which should be reported to the issue tracker), you can grab a copy of the alpha package from our downloads page or on PyPI.

The PGP key ID used for this release is Carlton Gibson: E17DF5C82B4F9D00

Full Stack Python: How to Transcribe Speech Recordings into Text with Python [Planet Python]

When you have a recording where one or more people are talking, it's useful to have a highly accurate and automated way to extract the spoken words into text. Once you have the text, you can use it for further analysis or as an accessibility feature.

In this tutorial, we'll use a high accuracy speech-to-text web application programming interface called AssemblyAI to extract text from an MP3 recording (many other formats are supported as well).

With the code from this tutorial, you will be able to take an audio file that contains speech such as this example one I recorded and output a highly accurate text transcription like this:

An object relational mapper is a code library that automates the transfer of 
data stored in relational, databases into objects that are more commonly used
in application code or EMS are useful because they provide a high level 
abstraction upon a relational database that allows developers to write Python 
code instead of sequel to create read update and delete, data and schemas in 
their database. Developers can use the programming language. They are 
comfortable with to work with a database instead of writing SQL...

(the text goes on from here but I abbreviated it at this point)

Tutorial requirements

Throughout this tutorial we are going to use the requests package, which we will install in just a moment. Make sure you also have Python 3, preferably 3.6 or newer, installed in your environment.

All code in this blog post is available open source under the MIT license on GitHub under the transcribe-speech-text-script directory of the blog-code-examples repository. Use the source code as you desire for your own projects.

Setting up the development environment

Change into the directory where you keep your Python virtual environments. I keep mine in a subdirectory named venvs within my user's home directory. Create a new virtualenv for this project using the following command.

python3 -m venv ~/venvs/pytranscribe

Activate the virtualenv with the activate shell script:

source ~/venvs/pytranscribe/bin/activate

After the above command is executed, the command prompt will change so that the name of the virtualenv is prepended to the original command prompt format, so if your prompt is simply $, it will now look like the following:

(pytranscribe) $

Remember, you have to activate your virtualenv in every new terminal window where you want to use dependencies in the virtualenv.

We can now install the requests package into the activated but otherwise empty virtualenv.

pip install requests==2.24.0

Look for output similar to the following to confirm the appropriate packages were installed correctly from PyPI.

(pytranscribe) $ pip install requests==2.24.0
Collecting requests==2.24.0
  Using cached https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/requests-2.24.0-py2.py3-none-any.whl
Collecting certifi>=2017.4.17 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/9f/f0/a391d1463ebb1b233795cabfc0ef38d3db4442339de68f847026199e69d7/urllib3-1.25.10-py2.py3-none-any.whl
Collecting chardet<4,>=3.0.2 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl
Collecting idna<3,>=2.5 (from requests==2.24.0)
  Using cached https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl
Installing collected packages: certifi, urllib3, chardet, idna, requests
Successfully installed certifi-2020.6.20 chardet-3.0.4 idna-2.10 requests-2.24.0 urllib3-1.25.10

We have all of our required dependencies installed so we can get started coding the application.

Uploading, initiating and transcribing audio

We have everything we need to start building our application that will transcribe audio into text. We're going to build this application in three files:

  1. upload_audio_file.py: uploads your audio file to a secure place on AssemblyAI's service so it can be accessed for processing. If your audio file is already accessible via a public URL, you don't need this step; you can just follow this quickstart
  2. initiate_transcription.py: tells the API which file to transcribe and to start immediately
  3. get_transcription.py: prints the status of the transcription if it is still processing, or displays the results of the transcription when the process is complete

Create a new directory named pytranscribe to store these files as we write them. Then change into the new project directory.

mkdir pytranscribe
cd pytranscribe

We also need to export our AssemblyAI API key as an environment variable. Sign up for an AssemblyAI account and log in to the AssemblyAI dashboard, then copy "Your API token" as shown in this screenshot:

[Screenshot: AssemblyAI dashboard showing "Your API token"]

export ASSEMBLYAI_KEY=your-api-key-here

Note that you must use the export command in every command line window that you want this key to be accessible. The scripts we are writing will not be able to access the API if you do not have the token exported as ASSEMBLYAI_KEY in the environment you are running the script.

Now that we have our project directory created and the API key set as an environment variable, let's move on to writing the code for the first file that will upload audio files to the AssemblyAI service.

Uploading the audio file for transcription

Create a new file named upload_audio_file.py and place the following code in it:

import argparse
import os
import requests


API_URL = "https://api.assemblyai.com/v2/"


def upload_file_to_api(filename):
    """Checks for a valid file and then uploads it to AssemblyAI
    so it can be saved to a secure URL that only that service can access.
    When the upload is complete we can then initiate the transcription
    API call.
    Returns the API JSON if successful, or None if file does not exist.
    """
    if not os.path.exists(filename):
        return None

    def read_file(filename, chunk_size=5242880):
        with open(filename, 'rb') as _file:
            while True:
                data = _file.read(chunk_size)
                if not data:
                    break
                yield data

    headers = {'authorization': os.getenv("ASSEMBLYAI_KEY")}
    response = requests.post("".join([API_URL, "upload"]), headers=headers,
                             data=read_file(filename))
    return response.json()

The above code imports the argparse, os and requests packages so that we can use them in this script. The API_URL constant holds the base URL of the AssemblyAI service. We define the upload_file_to_api function with a single argument, filename, which should be a string containing the absolute path to a file, including the file name itself.

Within the function, we check that the file exists, then use the requests package's chunked transfer encoding, fed by the read_file generator, to stream large files to the AssemblyAI API.

The os module's getenv function reads the API key that was set on the command line with the export command. Make sure that you run that export command in the terminal where you are running this script, otherwise the ASSEMBLYAI_KEY value will be blank. When in doubt, run echo $ASSEMBLYAI_KEY to see if the value matches your API key.

To use the upload_file_to_api function, append the following lines of code in the upload_audio_file.py file so that we can properly execute this code as a script called with the python command:

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("filename")
    args = parser.parse_args()
    upload_filename = args.filename
    response_json = upload_file_to_api(upload_filename)
    if not response_json:
        print("file does not exist")
    else:
        print("File uploaded to URL: {}".format(response_json['upload_url']))

The code above creates an ArgumentParser object that allows the application to obtain a single argument from the command line specifying the file we want to access, read and upload to the AssemblyAI service.

If the file does not exist, the script will print a message saying the file couldn't be found. In the happy path, where we do find the correct file at that path, the file is uploaded using the code in the upload_file_to_api function.

Execute the completed upload_audio_file.py script by running it on the command line with the python command. Replace FULL_PATH_TO_FILE with an absolute path to the file you want to upload, such as /Users/matt/devel/audio.mp3.

python upload_audio_file.py FULL_PATH_TO_FILE

Assuming the file is found at the location that you specified, when the script finishes uploading the file, it will print a message like this one with a unique URL:

File uploaded to URL: https://cdn.assemblyai.com/upload/463ce27f-0922-4ea9-9ce4-3353d84b5638

This URL is not public; it can only be used by the AssemblyAI service, so no one other than you and their transcription API will be able to access your file and its contents.

The important part is the last segment of the URL, in this example 463ce27f-0922-4ea9-9ce4-3353d84b5638. Save that unique identifier, because we need to pass it into the next script, which initiates the transcription service.

Initiate transcription

Next, we'll write some code to kick off the transcription. Create a new file named initiate_transcription.py. Add the following code to the new file.

import argparse
import os
import requests


API_URL = "https://api.assemblyai.com/v2/"
CDN_URL = "https://cdn.assemblyai.com/"


def initiate_transcription(file_id):
    """Sends a request to the API to transcribe a specific
    file that was previously uploaded to the API. This will
    not immediately return the transcription because it takes
    a moment for the service to analyze and perform the
    transcription, so there is a different function to retrieve
    the results.
    """
    endpoint = "".join([API_URL, "transcript"])
    json = {"audio_url": "".join([CDN_URL, "upload/{}".format(file_id)])}
    headers = {
        "authorization": os.getenv("ASSEMBLYAI_KEY"),
        "content-type": "application/json"
    }
    response = requests.post(endpoint, json=json, headers=headers)
    return response.json()

We have the same imports as the previous script, plus a new constant, CDN_URL, which matches the separate URL where AssemblyAI stores the uploaded audio files.

The initiate_transcription function essentially just sets up a single HTTP request to the AssemblyAI API to start the transcription process on the audio file at the specific URL passed in. This is why passing in the file_id is important: that completes the URL of the audio file that we are telling AssemblyAI to retrieve.

Finish the file by appending this code so that it can be easily invoked from the command line with arguments.

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("file_id")
    args = parser.parse_args()
    file_id = args.file_id
    response_json = initiate_transcription(file_id)
    print(response_json)

Start the script by running the python command on the initiate_transcription file and pass in the unique file identifier you saved from the previous step.

# the FILE_IDENTIFIER is returned in the previous step and will
# look something like this: 463ce27f-0922-4ea9-9ce4-3353d84b5638
python initiate_transcription.py FILE_IDENTIFIER

The API will send back a JSON response that this script prints to the command line.

{'audio_end_at': None, 'acoustic_model': 'assemblyai_default', 'text': None, 
 'audio_url': 'https://cdn.assemblyai.com/upload/463ce27f-0922-4ea9-9ce4-3353d84b5638', 
 'speed_boost': False, 'language_model': 'assemblyai_default', 'redact_pii': False, 
 'confidence': None, 'webhook_status_code': None, 
 'id': 'gkuu2krb1-8c7f-4fe3-bb69-6b14a2cac067', 'status': 'queued', 'boost_param': None, 
 'words': None, 'format_text': True, 'webhook_url': None, 'punctuate': True, 
 'utterances': None, 'audio_duration': None, 'auto_highlights': False, 
 'word_boost': [], 'dual_channel': None, 'audio_start_from': None}

Take note of the value of the id key in the JSON response. This is the transcription identifier we need to use to retrieve the transcription result. In this example, it is gkuu2krb1-8c7f-4fe3-bb69-6b14a2cac067. Copy the transcription identifier in your own response because we will need it to check when the transcription process has completed in the next step.

Retrieving the transcription result

We have uploaded and begun the transcription process, so let's get the result as soon as it is ready.

How long it takes to get the results back can depend on the size of the file, so this next script will send an HTTP request to the API and report back the status of the transcription, or print the output if it's complete.

Create a third Python file named get_transcription.py and put the following code into it.

import argparse
import os
import requests


API_URL = "https://api.assemblyai.com/v2/"


def get_transcription(transcription_id):
    """Requests the transcription from the API and returns the JSON
    response."""
    endpoint = "".join([API_URL, "transcript/{}".format(transcription_id)])
    headers = {"authorization": os.getenv('ASSEMBLYAI_KEY')}
    response = requests.get(endpoint, headers=headers)
    return response.json()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("transcription_id")
    args = parser.parse_args()
    transcription_id = args.transcription_id
    response_json = get_transcription(transcription_id)
    if response_json['status'] == "completed":
        for word in response_json['words']:
            print(word['text'], end=" ")
    else:
        print("current status of transcription request: {}".format(
              response_json['status']))

The code above has the same imports as the other scripts. In this new get_transcription function, we simply call the AssemblyAI API with our API key and the transcription identifier from the previous step (not the file identifier). We retrieve the JSON response and return it.

In the main section we take the transcription identifier passed in as a command line argument and pass it into the get_transcription function. If the response JSON from get_transcription contains a completed status, then we print the results of the transcription. Otherwise, we print the current status, which is either queued or processing while the job has not yet completed.

Call the script using the command line and the transcription identifier from the previous section:

python get_transcription.py TRANSCRIPTION_ID

If the service has not yet started working on the transcript then it will return queued like this:

current status of transcription request: queued

When the service is currently working on the audio file it will return processing:

current status of transcription request: processing

When the process is completed, our script will return the text of the transcription, like you see here:

An object relational mapper is a code library that automates the transfer of 
data stored in relational, databases into objects that are more commonly used
in application code or EMS are useful because they provide a high level 

...(output abbreviated)

That's it, we've got our transcription!
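If you would rather not re-run get_transcription.py by hand until the status flips to completed, a small polling loop can do the waiting for you. The following is a minimal sketch, not part of the original three scripts; it assumes the get_transcription function shown above is importable from get_transcription.py, and it also stops if the API reports an error status:

import time

from get_transcription import get_transcription


def wait_for_transcription(transcription_id, interval=5):
    """Polls the API every few seconds until the transcription
    is complete (or errors out), then returns the JSON response."""
    while True:
        response_json = get_transcription(transcription_id)
        if response_json['status'] in ('completed', 'error'):
            return response_json
        print("current status: {}".format(response_json['status']))
        time.sleep(interval)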

You may be wondering what to do if the accuracy isn't where you need it to be for your situation. That is where boosting accuracy for keywords or phrases and selecting a model that better matches your data come in. You can use either of those two methods to boost the accuracy of your recordings to an acceptable level for your situation.

What's next?

We just finished writing some scripts that call the AssemblyAI API to transcribe recordings with speech into text output.

Next, take a look at some of their more advanced documentation that goes beyond the basics in this tutorial:

Questions? Let me know via an issue ticket on the Full Stack Python repository, on Twitter @fullstackpython or @mattmakai. See something wrong with this post? Fork this page's source on GitHub and submit a pull request.

14:55

Ransomware Task Force gains traction [Computable]

The American Ransomware Task Force keeps attracting more members. Besides big names from the technology world, more and more specialised companies are finding their way to the anti-ransomware organisation. Data security company Datto is the newest member.

Large differences between mbo ICT programmes [Computable]

The mbo (vocational) programmes for ICT support staff score very differently. Student performance varies considerably between programmes, and some colleges come off far better in the students' judgement than others. Strikingly...

Vanenburg sells Educator to Breens [Computable]

Vanenburg, the investment group of the Jan Baan family from Putten, has sold software developer Educator to Breens Network. The subsidiary was not a good fit with Vanenburg's core business of enterprise software, in particular ERP optimisation.

Almere hatches plans for ICT incubator [Computable]

Almere wants to open an ICT Field Lab in May of this year, aimed at housing independent initiatives for retraining and upskilling people into IT professionals. SMEs and freelancers will also be able to have their IT problems solved by the ICT students of the...

Tableau offers free training in data skills [Computable]

Through Data Literacy for All, Tableau Software offers seven free courses in data skills. The online curriculum spans several competence levels and is meant to sharpen participants' existing data skills or open the door to new...

Rapid recovery for the IT job market; salaries stagnate [Computable]

After a corona dip in the second quarter of 2020, the job market for highly educated IT professionals recovered at a rapid pace. Large companies and public-sector organisations seized the Covid-19 crisis to bring IT experts into permanent...

How to Install Microsoft Edge on Ubuntu & Linux Mint [OMG! Ubuntu!]

Learn how to install Microsoft Edge on Ubuntu, Linux Mint, and related distributions. It's easy, and only requires a couple of quick steps, so read on…


Linux Release Roundup: Kdenlive, BleachBit & LibreOffice [OMG! Ubuntu!]

We round up a crop of recent Linux releases, including system cleaner BleachBit, open source video editor Kdenlive, and the phenomenally popular LibreOffice.


Podcast.__init__: Driving Toward A Faster Python Interpreter With Pyston [Planet Python]

One of the common complaints about Python is that it is slow. There are languages and runtimes that can execute code faster, but they are not as easy to be productive with, so many people are willing to make that tradeoff. There are some use cases, however, that truly need the benefit of faster execution. To address this problem Kevin Modzelewski helped to create the Pyston interpreter, which is focused on speeding up unmodified Python code. In this episode he shares the history of the project, discusses his current efforts to optimize a fork of the CPython interpreter, and his goals for building a business to support the ongoing work to make Python faster for everyone. This is an interesting look at the opportunities that exist in the Python ecosystem and the work being done to address some of them.


Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host as usual is Tobias Macey and today I’m interviewing Kevin Modzelewski about his work on Pyston, an interpreter for Python focused on compatibility and speed.

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing what Pyston is and how it got started?
  • Can you share some of the history of the project and the recent changes?
    • What is your motivation for focusing on Pyston and Python optimization?
  • What are the use cases that you are primarily focused on with Pyston?
  • Why do you think Python needs another performance project?
  • Can you describe the technical implementation of Pyston?
    • How has the project evolved since you first began working on it?
  • What are the biggest challenges that you face in maintaining compatibility with CPython?
  • How does the approach to Pyston compare to projects like PyPy and Pyjion?
  • How are you approaching sustainability and governance of the project?
  • What are some of the most interesting, innovative, or unexpected uses for Pyston that you have seen?
  • What have you found to be the most interesting, unexpected, or challenging lessons that you have learned while working on Pyston?
  • When is Pyston the wrong choice?
  • What do you have planned for the future of the project?

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA

Python Pool: CV2 Normalize() in Python Explained With Examples [Planet Python]

Hello geeks and welcome! In this article, we will cover cv2 normalize(). Along with that, we will also look at its syntax for an overall better understanding. Then we will see the application of the theory through a couple of examples. cv2 is a cross-platform library designed for computer vision tasks. We will look at its application and workings later in this article. But first, let us look at the definition of the function.

In general, normalization means reducing data redundancy and eliminating unwanted characteristics. Image normalization, then, can be understood as changing the range of an image's pixel intensities. It is often linked with increasing contrast, which helps in better image segmentation.

Syntax

cv.normalize(img,  norm_img)

This is the general syntax of our function. Here "img" is the source image to be normalized, and "norm_img" is the destination array that receives the normalized result. As we move ahead in this article, we will develop a better understanding of this function.

How Does cv2 Normalize Work?

We have discussed the definition and general syntax of cv2 normalize. In this section, we will try to get a brief idea of how it works. With its help, we can remove noise from an image and bring the image into a range of intensity values that looks more natural to our senses. Primarily it does the job of making the subject image a bit clearer. It does so with the help of several parameters that we will discuss in detail in the next section.

Application of Cv2 Normalize

In this section, we will see what difference the cv2 Normalize code makes. To achieve this, we will first use the Cv2 imshow to display an image, after which we will use the normalize function and compare the 2 images to spot the difference.

import cv2

img = cv2.imread('3.jpeg', 1)  # read the image in colour mode
cv2.imshow("sample", img)      # display it in a window
cv2.waitKey(5000)              # keep the window open for five seconds

Output:

[Image: the sample image displayed with imshow()]

Here we have successfully used the imshow() function to print our image. As I have already covered the imshow() function, I will not go in-depth about it here. Our picture is not very clear, and its overall appearance can be improved considerably. Now let use our function and see the difference.

import cv2 as cv
import numpy as ppool
img = cv.imread("3.jpeg")
norm = ppool.zeros((800,800))
final = cv.normalize(img,  norm, 0, 255, cv.NORM_MINMAX)
cv.imshow('Normalized Image', final)
cv.imwrite('city_normalized.jpg', final)
cv.waitKey(5000)

Output:

[Image: the normalized image]

See what our function does; the change is quite evident. When you compare it with the previous one, you can notice that it is far clearer and has better contrast.

Now let us try to decode and understand the code that helped us achieve it. First, we imported cv2, and then the NumPy module. We used the imread() function to read our image and the NumPy zeros function to create a new 800x800 array for the result. Then comes the cv.normalize call itself: the first argument is our image, the second is the destination array, and 0 and 255 are alpha and beta, the lower and upper bounds of the output range. Finally, cv.NORM_MINMAX tells the function to rescale the pixel values linearly so that the minimum maps to alpha and the maximum to beta.
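
For reference, the rescaling that cv.NORM_MINMAX performs can be written out as the standard min-max normalization formula, where I is the input image and alpha and beta are the bounds passed in the call above:

$$I'(x, y) = \alpha + \left(I(x, y) - \min(I)\right)\cdot\frac{\beta - \alpha}{\max(I) - \min(I)}$$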

How to get the original image back?

The normalize function writes its result into a separate destination array (and in our code we also saved it to a new file), so the original image remains unchanged. To view it again, we can simply pass the original array to the imshow() function.

Conclusion

In this article, we covered cv2 normalize(). We looked at its syntax and examples, and saw through those examples what difference the function can make to an image. As in our case, by applying it we were able to achieve a much clearer picture. In the end, we can conclude that cv2 normalize() helps us by rescaling pixel intensities and increasing the overall contrast.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section.

The post CV2 Normalize() in Python Explained With Examples appeared first on Python Pool.

Python Pool: What is Python Syslog? Explained with Different methods [Planet Python]

Hello geeks and welcome! In today's article, we will cover Python's syslog module. For a proper understanding, we will look at its different methods along with some sample code. But before moving ahead, let us first understand syslog through its definition: the module provides an interface to the Unix syslog library (Unix being an OS developed for multiuser, multitasking computers). In addition, a class named SysLogHandler is available in logging.handlers, a pure Python module that can speak to a syslog server.

Different Methods for Python Syslog

1. SYSLOG.SYSLOG(MESSAGE, PRIORITY)

This function sends a string message to the system logger; the logger keeps track of events while software runs. The priority argument is optional, defaults to LOG_INFO, and determines the message's priority.

2. SYSLOG.OPENLOG

This function sets logging options that apply to subsequent syslog() calls. It takes an ident argument of string type, which is prepended to every message.

3. SYSLOG.CLOSELOG

This method resets the syslog module's values and closes the connection to the system logger.

4. SYSLOG.SETLOGMASK

This method sets the priority mask to maskpri and returns the previous mask value. Messages whose priority is not covered by the mask are ignored; see the sketch below this list.
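
As a minimal sketch of setlogmask (an illustration of typical usage, not from the original article), the LOG_UPTO helper builds a mask covering every priority up to and including the one given:

import syslog

# Only allow messages of priority WARNING and more severe through the mask
previous_mask = syslog.setlogmask(syslog.LOG_UPTO(syslog.LOG_WARNING))

syslog.syslog(syslog.LOG_INFO, "filtered out by the mask")
syslog.syslog(syslog.LOG_ERR, "this one gets through")

syslog.setlogmask(previous_mask)  # restore the previous mask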

Sample Code Covering Various Syslog Methods

In this section we will look at some sample code that uses the various methods discussed in the section above.

import syslog
import sys

syslog.openlog(sys.argv[0])

syslog.syslog(syslog.LOG_NOTICE, "notice-1")
syslog.syslog(syslog.LOG_NOTICE, "notice-2")

syslog.closelog()

In the code above, we first import the syslog module, along with sys (system-specific parameters and functions). Next, we call openlog with sys.argv[0]: sys.argv is a list containing the command-line arguments passed to the script, and its first element is the name of the script itself, which openlog uses as the ident. We then log two notices with the syslog method and finish with a closelog() call.

SysLogHandler

As discussed at the start of the article, this is a handler class available in logging.handlers. It supports sending log messages to a remote or local Unix syslog. Let us look at an example for a better understanding.

import logging
import logging.handlers
import sys

logger = logging.getLogger()
logger.setLevel(logging.INFO)
syslog = logging.handlers.SysLogHandler(address=("localhost", 8000))
logger.addHandler(syslog)
print (logger.handlers)

Output

[Output: the logger's list of handlers, showing the attached SysLogHandler]

Here we first import the logging module, a built-in module of Python. Then we import logging.handlers, which routes log records to their appropriate destinations, and sys, as discussed above. In the next step, we create a logger object with getLogger() and call setLevel(), so that all messages below that level are ignored. Then we construct our SysLogHandler, pointed at localhost port 8000, and use addHandler to attach it to our logger. Finally, we print the logger's list of handlers.

Difference Between Syslog and Logging

This section discusses the basic difference between syslog and logging in Python. We have covered syslog in detail, but before comparing the two, let us look at logging's definition: it is a built-in Python module that helps the programmer keep track of events as they take place. The basic difference is that syslog is more powerful, whereas logging is easy to use and suited to simple purposes. Another advantage of syslog is that it can send log lines to a different computer to have them logged there.


Conclusion

In this article, we covered Python's syslog module. We looked at its definition, its use, and the different methods associated with it. In the end, we can conclude that it provides us with an interface to the Unix system logger.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section. Done reading this why not look at the FizzBuzz challenge next.

The post What is Python Syslog? Explained with Different methods appeared first on Python Pool.

Python Pool: How to Solve “unhashable type: list” Error in Python [Planet Python]

Hello geeks, and welcome! In this article, we will be covering the "unhashable type: list" error, which we come across when writing code in Python. Our main objective is to look at this error and then to troubleshoot and get rid of it, with the help of a couple of examples. But first, let us try to get a brief overview of why this error occurs.

Python dictionaries only accept hashable data types as keys. Hashable here means values whose hash remains the same during their lifetime. When we use the list data type as a key, which is mutable and therefore non-hashable, we get this kind of error.

The error: "unhashable type: list"

In this section, we will look at the reason due to which this error occurs. We will take into account everything discussed so far. Let us see this through an example:

numb ={ 1:'one', [2,10]:'two and ten',11:'eleven'}
print(numb)

Output:

TypeError: unhashable type: 'list'

Above, we have considered a straightforward example: we tried to create a dictionary of numbers and print it. But instead of the output we get an error, because we used a list ([2,10]) as a key. In the next section, we will see how to eliminate the error.

But before that, let us also look at another example.

country=[
    {
    "name":"India",[28,7]:"states and area",
    "name":"France",[27,48]:"states and area"}
]
print(country)
TypeError: unhashable type: 'list'

Here in the above example, we come across the same problem. In this dictionary, we have taken the number of states and their ranking worldwide as the data. Now let's quickly jump to the next section and eliminate these errors.

Troubleshooting: "unhashable type: list"

In this section, we will get rid of the errors. Let us start with the first example: to rectify it, all we have to do is use a tuple.

numb ={ 1:'one', tuple([2,10]):'two and ten',11:'eleven'}
print(numb)
{1: 'one', (2, 10): 'two and ten', 11: 'eleven'}

With just a slight code change, we can rectify this: here we have used a tuple, which is a hashable data type. We can rectify the error in the second example in the same way.

country=[
    {
    "name":"India",tuple([28,7]):"states and area",
    "name":"France",tuple([27,48]):"states and area"}
]
print(country)
[{'name': 'France', (28, 7): 'states and area', (27, 48): 'states and area'}]

Again, with the help of a tuple, we are able to rectify it. It is a simple error and can be fixed easily.

Difference between hashable and unhashable type

In this section, we look at the basic difference between the two types, and classify the various data types that we use while coding in Python under these two headings.

  • Hashable: the hash value remains constant throughout the object's lifetime. Data types in this category include int, float, tuple, bool, string and bytes.
  • Unhashable: the value can change over time, so no constant hash exists. Data types in this category include list, set, dict and bytearray.
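
A quick way to check which side of this divide a value falls on is to pass it to the built-in hash() function; this is a small illustrative sketch, not from the examples above:

# Hashable values return an integer; unhashable ones raise TypeError
print(hash(42))            # works: int is hashable
print(hash((2, 10)))       # works: tuple is hashable
try:
    hash([2, 10])          # fails: list is unhashable
except TypeError as err:
    print(err)             # prints: unhashable type: 'list'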

CONCLUSION

In this article, we covered the "unhashable type: list" error. We looked at why it occurs and at the methods by which we can rectify it, using a couple of examples. In the end, we can conclude that this error arises when we use an unhashable data type, such as a list, as a dictionary key.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section. Done reading this why not read about the cPickle module.

The post How to Solve “unhashable type: list” Error in Python appeared first on Python Pool.

Andre Roberge: Don't you want to win a free book? [Planet Python]

 At the end of Day 2 of the contest, still only one entry. If this keeps up, by next Monday there will not be a draw for a prize, and we will have a winner by default.


The submission was based on the use of __slots__. In playing around with similar cases, I found an AttributeError message that I had not seen before. Here's a code sample.

class F:
    __slots__ = ["a"]
    b = 1

f = F()
f.b = 2  # raises AttributeError: 'b' is read-only

What happens if I execute this code using Friendly-traceback? Normally, there would be an explanation provided below the list of variables. Here we see nothing.



Let's inspect by using the friendly console.





I'll have to take care of this later today. Perhaps you know of other error messages specific to the use of __slots__. If so, and if you are quick enough, you could enter the contest. ;-)

18-01-2021

20:41

GitLab and IBM aim to jointly speed up the devops process [Computable]

Developer platform GitLab is announcing a strategic partnership with IBM, under which GitLab will be included in IBM Cloud Pak. This gives IBM customers access to GitLab's solutions for building cloud applications.

Bella Italia for rental platform HousingAnywhere [Computable]

The Rotterdam-based online property platform HousingAnywhere, for young professionals and international students, is continuing its adventure in Italy. The startup has acquired the Milan-based room rental site Stanzazoo. HousingAnywhere wants to be more than just a booking platform for housing and...

Parler website finds shelter online again [Computable]

Parler has found a home again. The controversial website of the social media network is now being hosted by Epik, which also hosts the right-wing sites Gab and Daily Stormer. Parler's app is still not back...

13 questions for a quantum architect [Linuxtoday.com]

With quantum computing on the horizon, take a look at which type of architect would be needed and what companies need to consider to build such complex systems.

5 reasons why you should develop a Linux container [Linuxtoday.com]

If you've shunned containers in the past, these five advantages will make you rethink containerization.

Real Python: Make Your First Python Game: Rock, Paper, Scissors! [Planet Python]

Game programming is a great way to learn how to program. You use many tools that you’ll see in the real world, plus you get to play a game to test your results! An ideal game to start your Python game programming journey is rock paper scissors.

In this tutorial, you’ll learn how to:

  • Code your own rock paper scissors game
  • Take in user input with input()
  • Play several games in a row using a while loop
  • Clean up your code with Enum and functions
  • Define more complex rules with a dictionary


What Is Rock Paper Scissors?

You may have played rock paper scissors before. Maybe you’ve used it to decide who pays for dinner or who gets first choice of players for a team.

If you’re unfamiliar, rock paper scissors is a hand game for two or more players. Participants say “rock, paper, scissors” and then simultaneously form their hands into the shape of a rock (a fist), a piece of paper (palm facing downward), or a pair of scissors (two fingers extended). The rules are straightforward:

  • Rock smashes scissors.
  • Paper covers rock.
  • Scissors cut paper.

Now that you have the rules down, you can start thinking about how they might translate to Python code.

Play a Single Game of Rock Paper Scissors in Python

Using the description and rules above, you can make a game of rock paper scissors. Before you dive in, you’re going to need to import the module you’ll use to simulate the computer’s choices:

import random

Awesome! Now you’re able to use the different tools inside random to randomize the computer’s actions in the game. Now what? Since your users will also need to be able to choose their actions, the first logical thing you need is a way to take in user input.

Take User Input

Taking input from a user is pretty straightforward in Python. The goal here is to ask the user what they would like to choose as an action and then assign that choice to a variable:

user_action = input("Enter a choice (rock, paper, scissors): ")

This will prompt the user to enter a selection and save it to a variable for later use. Now that the user has selected an action, the computer needs to decide what to do.

Make the Computer Choose

A competitive game of rock paper scissors involves strategy, but rather than trying to develop a model for that, you can save yourself some time by having the computer select a pseudorandom action.

You can use random.choice() to have the computer randomly select between the actions:

possible_actions = ["rock", "paper", "scissors"]
computer_action = random.choice(possible_actions)

This allows a random element to be selected from the list. You can also print the choices that the user and the computer made:

print(f"\nYou chose {user_action}, computer chose {computer_action}.\n")

Printing the user and computer actions can be helpful to the user, and it can also help you debug later on in case something isn’t quite right with the outcome.

Determine a Winner
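
The full walkthrough of this step is in the linked article below; as a rough sketch (my own, not Real Python's exact code), the winner logic follows directly from the three rules above, using the user_action and computer_action variables from the earlier snippets:

if user_action == computer_action:
    print(f"Both players selected {user_action}. It's a tie!")
elif user_action == "rock":
    if computer_action == "scissors":
        print("Rock smashes scissors! You win!")
    else:
        print("Paper covers rock! You lose.")
elif user_action == "paper":
    if computer_action == "rock":
        print("Paper covers rock! You win!")
    else:
        print("Scissors cut paper! You lose.")
elif user_action == "scissors":
    if computer_action == "paper":
        print("Scissors cut paper! You win!")
    else:
        print("Rock smashes scissors! You lose.")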

Read the full article at https://realpython.com/python-rock-paper-scissors/ »



Chris Moffitt: Case Study: Automating Excel File Creation and Distribution with Pandas and Outlook [Planet Python]

Introduction

I enjoy hearing from readers that have used concepts from this blog to solve their own problems. It always amazes me when I see examples where only a few lines of python code can solve a real business problem and save organizations a lot of time and money. I am also impressed when people figure out how to do this with no formal training - just with some hard work and willingness to persevere through the learning curve.

This example comes from Mark Doll. I’ll turn it over to him to give his background:

I have been learning/using Python for about 3 years to help automate business processes and reporting. I’ve never had any formal training in Python, but found it to be a reliable tool that has helped me in my work.

Read on for more details on how Mark used Python to automate a very manual process of collecting and sorting Excel files to email to 100’s of users.

The Problem

Here’s Mark’s overview of the problem:

A business need arose to send out emails with Excel attachments to a list of ~500 users and presented us with a large task to complete manually. Making this task harder was the fact that we had to split data up by user from a master Excel file to create their own specific file, then email that file out to the correct user.

Imagine the time it would take to manually filter, cut and paste the data into a file, then save it and email it out - 500 times! Using this Python approach we were able to automate the entire process and save valuable time.

I have seen this type of problem multiple times in my experience. If you don’t have experience with a programming language, then it can seem daunting. With Python, it’s very feasible to automate this tedious process. Here’s a graphical view of what Mark was able to do:

[Diagram: the automated workflow, from master Excel file to individual emailed reports]

Solving the Problem

The first step is getting the imports in place:

import datetime
import os
import shutil
from pathlib import Path
import pandas as pd
import win32com.client as win32

Now we will set up some strings with the current date and our directory structure:

## Set Date Formats
today_string = datetime.datetime.today().strftime('%m%d%Y_%I%p')
today_string2 = datetime.datetime.today().strftime('%b %d, %Y')

## Set Folder Targets for Attachments and Archiving
attachment_path = Path.cwd() / 'data' / 'attachments'
archive_dir = Path.cwd() / 'archive'
src_file = Path.cwd() / 'data' / 'Example4.xlsx'

Let’s take a look at the data file we need to process:

df = pd.read_excel(src_file)
df.head()
[Screenshot: the first rows of the source Excel file]

The next step is to group all of each customer's transactions together. We start by doing a groupby on CUSTOMER_ID.

customer_group = df.groupby('CUSTOMER_ID')

It might not be apparent to you what customer_group is in this case. A loop shows how we can process this grouped object:

for ID, group_df in customer_group:
    print(ID)
A1000
A1001
A1002
A1005

Here’s the last group_df that shows all of the transactions for customer A1005:

[Screenshot: the transactions for customer A1005]

We have everything we need to create an Excel file for each customer and store in a directory for future use:

## Write each ID, Group to Individual Excel files and use ID to name each file with Today's Date
attachments = []
for ID, group_df in customer_group:
    attachment = attachment_path / f'{ID}_{today_string}.xlsx'
    group_df.to_excel(attachment, index=False)
    attachments.append((ID, str(attachment)))

The attachments list contains the customer ID and the full path to the file:

[('A1000',
'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1000_01162021_12PM.xlsx'),
('A1001',
'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1001_01162021_12PM.xlsx'),
('A1002',
'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1002_01162021_12PM.xlsx'),
('A1005',
'c:\\Users\\chris\\notebooks\\2020-10\\data\\attachments\\A1005_01162021_12PM.xlsx')]

To make the processing easier, we convert the list to a DataFrame:

df2 = pd.DataFrame(attachments, columns=['CUSTOMER_ID', 'FILE'])
[Screenshot: DataFrame of customer IDs and file paths]

The final data prep stage is to generate a list of files with their email addresses by merging the DataFrames together:

email_merge = pd.merge(df, df2, how='left')
combined = email_merge[['CUSTOMER_ID', 'EMAIL', 'FILE']].drop_duplicates()

Which gives this simple DataFrame:

[Screenshot: combined DataFrame with CUSTOMER_ID, EMAIL and FILE columns]

We’ve gathered the list of customers, their emails and the attachments. Now we need to send an email with Outlook. Refer to this article for additional explanation of this code:

# Email Individual Reports to Respective Recipients
class EmailsSender:
    def __init__(self):
        self.outlook = win32.Dispatch('outlook.application')

    def send_email(self, to_email_address, attachment_path):
        mail = self.outlook.CreateItem(0)
        mail.To = to_email_address
        mail.Subject = today_string2 + ' Report'
        mail.Body = """Please find today's report attached."""
        mail.Attachments.Add(Source=attachment_path)
        # Use this to show the email
        #mail.Display(True)
        # Uncomment to send
        #mail.Send()

We can use this simple class to generate the emails and attach the Excel file.

email_sender = EmailsSender()
for index, row in combined.iterrows():
    email_sender.send_email(row['EMAIL'], row['FILE'])
[Screenshot: a generated Outlook email with the Excel attachment]

The last step is to move the files to our archive directory:

# Move the files to the archive location
for f in attachments:
    shutil.move(f[1], archive_dir)

Summary

This example does a nice job of automating a highly manual process where someone likely did a lot of copying and pasting and manual file manipulation. I hope the solution that Mark developed can help you figure out how to automate some of the more painful parts of your job.

I encourage you to use this example to identify similar challenges in your day to day work. Maybe you don’t have to work with 100’s of files but you might have a manual process you run once a week. Even if that process only takes 1 hour, use that as a jumping off point to figure out how to use Python to make it easier. There is no better way to learn Python than to apply it to one of your own problems.

Thanks again to Mark for taking the time to walk us through this content example!

Zato Blog: Why Zato and Python make sense for complex API integrations [Planet Python]

This article is an excerpt from the broader set of changes to our documentation in preparation for Zato.

High-level overview

Zato and Python logo

Zato is a highly scalable, Python-based integration platform for APIs, SOA and microservices. It is used to connect distributed systems or data sources and to build API-first, backend applications. The platform is designed and built specifically with Python users in mind.

Zato is used for enterprise, business integrations, data science, IoT and other scenarios that require integrations of multiple systems.

Real-world, production Zato environments include:

  • A platform for processing payments from consumer devices

  • A system for a telecommunication operator integrating CRM, ERP, Billing and other systems as well as applications of the operator's external partners

  • A data science system for processing of information related to securities transactions (FIX)

  • A platform for public administration systems, helping achieve healthcare data interoperability through the integration of independent data sources, databases and health information exchanges (HIE)

  • A global IoT platform integrating medical devices

  • A platform to process events produced by early warning systems

  • Backend e-commerce systems managing multiple suppliers, marketplaces and process flows

  • B2B platforms to accept and process multi-channel orders in cooperation with backend ERP and CRM systems

  • Platforms integrating real-estate applications, collecting data from independent data sources to present unified APIs to internal and external applications

  • A system for the management of hardware resources of an enterprise cloud provider

  • Online auction sites

  • E-learning platforms

Zato offers connectors to all the popular technologies, such as REST, SOAP, AMQP, IBM MQ, SQL, Odoo, SAP, HL7, Redis, MongoDB, WebSockets, S3 and many more.

Running on premises, in the cloud, or under Docker, Kubernetes and other container technologies, Zato services are optimised for high performance - it is easily possible to run hundreds and thousands of services on typical server instances as offered by Amazon, Google Cloud, Azure or other cloud providers.

Zato servers offer high availability and no-downtime deployment. Servers form clusters that are used to scale systems both horizontally and vertically.

The software is 100% Open Source, with commercial and community support available.

A platform and language for interesting, reusable and atomic services

Zato promotes the design of, and helps you build, solutions composed of services which are interesting, reusable and atomic (IRA):

  • I for Interesting - each service should make its clients want to use it more and more. People should immediately see the value of using the service in their processes. An interesting service is one that strikes everyone as immediately useful in wider contexts, preferably with few or no conditions, prerequisites and obligations. An interesting service is aesthetically pleasing, both in terms of its technical usage as well as in its potential applicability in fields broader than originally envisaged. If people check the service and say "I know, we will definitely use it" or "Why don't we use it" you know that the service is interesting. If they say "Oh no, not this one again" or "No, thanks, but no" then it is the opposite.
  • R for Reusable - services can be used in different, independent business processes
  • A for Atomic - each service fulfils a single, atomic business need

Each service is deployed independently and, as a whole, they constitute an implementation of business processes taking place in your company or organisation.

With Zato, developers use Python to focus exclusively on the business logic, while the platform takes care of scalability, availability, communication protocols, messaging, security and routing. This lets developers concentrate only on what is the very core of systems integrations - making sure their services are IRA.
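To give a concrete flavour of this, below is a minimal sketch of what such a service can look like. The service name and payload fields are made up for illustration; only the basic Service subclass API from the Zato documentation is assumed:

import logging

from zato.server.service import Service

class GetUserDetails(Service):
    """ Returns details of a user, given the user's ID (hypothetical example).
    """
    def handle(self):
        # Input arrives already parsed, no matter which protocol
        # (REST, AMQP, scheduler, ...) the service was invoked over.
        user_id = self.request.payload['user_id']

        # Business logic only - connections, security and routing
        # are configured on the platform, not in code.
        self.response.payload = {'user_id': user_id, 'status': 'active'}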

Python is the perfect choice for API integrations, SOA and microservices, because it hits the sweet spot under several key headings:

  • It is a very high-level language, with syntax close to the grammar of spoken languages, which makes it easy to translate business requirements into an implementation
  • Yet, it is a solid, mainstream and full-featured, real programming language rather than a domain-specific one, which means that it offers developers a great degree of flexibility and choice in expressing their needs
  • Many Python developers have a strong web programming / open source background, which means that it takes little effort to take a step further, towards API integrations and backend servers. In turn, this means that it is easy to find good people for API projects.
  • Many Python developers know multiple programming languages - this is very useful in the context of integration projects, where one is typically faced with dozens of technologies, vendors or integration methods and techniques
  • Lower maintenance costs - thanks to the language's design, Python programmers tend to produce code that is easy to read and understand. From the perspective of multi-year maintenance, reading and analysing code, rather than writing it, is what most programmers do most of the time, so it makes sense to use a language which makes the most common tasks easy.

In short, Python can be construed as executable pseudo-code, with many of its users already having roots in modern server-side programming. Both technically and strategically, Zato, as a platform built in Python and designed for Python developers from day one, is thus a natural choice for complex and sophisticated API solutions.

More than services

Systems integrations commonly require two more features that Zato offers as well:

  • File transfer - allows you to move batch data between locations and to distribute it among systems and APIs

  • Single Sign-On (SSO) - a convenient REST interface lets you easily provide authentication and authorisation to users across multiple systems

Next steps

  • Start the tutorial to learn more technical details about Zato, including its architecture, installation and usage. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single, consistent API to its callers.

  • Visit the support page if you would like to discuss anything about Zato with its creators

Python Pool: 6 Ways to Plot a Circle in Matplotlib [Planet Python]

Hello coders!! In this article, we will learn how to make a circle using matplotlib in Python. A circle is a figure of round shape with no corners. There are various ways in which one can plot a circle in matplotlib. Let us discuss them in detail.

Method 1: matplotlib.patches.Circle():

  • SYNTAX:
    • class matplotlib.patches.Circle(xy, radius=r, **kwargs)
  • PARAMETERS:
    • xy: (x,y) center of the circle
    • r: radius of the circle
  • RESULT: a circle of radius r with center at (x,y)
import matplotlib.pyplot as plt

figure, axes = plt.subplots()
cc = plt.Circle((0.5, 0.5), 0.4)

axes.set_aspect(1)
axes.add_artist(cc)
plt.title('Colored Circle')
plt.show()

Output & Explanation:


Here, we have used plt.Circle() (an alias for matplotlib.patches.Circle) to draw the circle. We adjusted the ratio of the y unit to the x unit with the set_aspect() method, set the radius of the circle to 0.4 and made the coordinate (0.5, 0.5) the center of the circle.

Method 2: Using the parametric equation of a circle:

The parametric equation of a circle is:

  • x = r cos θ
  • y = r sin θ

r: radius of the circle

This equation can be used to draw a circle using matplotlib.

import numpy as np
import matplotlib.pyplot as plt

angle = np.linspace(0, 2 * np.pi, 150)

radius = 0.4

x = radius * np.cos(angle)
y = radius * np.sin(angle)

figure, axes = plt.subplots(1)

axes.plot(x, y)
axes.set_aspect(1)

plt.title('Parametric Equation Circle')
plt.show()

Output & Explanation:


In this example, we used the parametric equation of the circle to plot the figure using matplotlib. For this example, we took the radius of the circle as 0.4 and set the aspect ratio as 1.

Method 3: Scatter Plot to plot a circle:

A scatter plot is a graphical representation that uses dots to represent the values of two numeric variables. Each dot on the xy plane indicates the value of an individual data point.

  • SYNTAX:
    • matplotlib.pyplot.scatter(x_axis_data, y_axis_data, s=None, c=None, marker=None, cmap=None, vmin=None, vmax=None, alpha=None, linewidths=None, edgecolors=None)
  • PARAMETERS:
    • x_axis_data-  x-axis data
    • y_axis_data- y-axis data
    • s- marker size
    • c- color or sequence of colors for markers
    • marker- marker style
    • cmap- cmap name
    • linewidths- width of marker border
    • edgecolor- marker border-color
    • alpha- blending value
import matplotlib.pyplot as plt

plt.scatter(0, 0, s=7000)

plt.xlim(-0.85, 0.85)
plt.ylim(-0.95, 0.95)

plt.title("Scatter plot of points Circle")
plt.show()

Output & Explanation:


Here, we have used the scatter plot to draw the circle. The xlim() and ylim() methods are used to set the x and y limits of the axes, respectively. We set the marker size to 7000 and got the circle as the output.

Method 4: Matplotlib hollow circle:

import matplotlib.pyplot as plt

plt.scatter(0, 0, s=10000, facecolors='none', edgecolors='blue')

plt.xlim(-0.5, 0.5)
plt.ylim(-0.5, 0.5)

plt.show()

Output & Explanation:


To make the circle hollow, we set the facecolors parameter to 'none'. To differentiate the circle from the plane, we set the edgecolors parameter to blue for better visualization.

Method 5: Matplotlib draw circle on image:

import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.cbook as cb


with cb.get_sample_data('C:\\Users\\Prachee\\Desktop\\cds\\img1.jpg') as image_file:
    image = plt.imread(image_file)

fig, ax = plt.subplots()
im = ax.imshow(image)
patch = patches.Circle((100, 100), radius=80, transform=ax.transData)
im.set_clip_path(patch)

ax.axis('off')
plt.show()

Output & Explanation:


In this example, we first loaded our data and then used the axes.imshow() method. This method is used to display data as an image. We then set the radius and the center of the circle. Then using the set_clip_path() method we set the artist’s clip-path.

Method 6: Matplotlib transparent circle:

import matplotlib.pyplot as plt

figure, axes = plt.subplots()
cc = plt.Circle((0.5, 0.5), 0.4, alpha=0.1)

axes.set_aspect(1)
axes.add_artist(cc)
plt.title('Colored Circle')
plt.show()

Output & Explanation:


To make the circle transparent, we changed the value of the alpha parameter, which controls the transparency of the figure.

Conclusion:

With this, we come to the end of this article. These are the various ways in which one can plot a circle using matplotlib in Python.

However, if you have any doubts or questions, do let me know in the comment section below. I will try to help you as soon as possible.

Happy Pythoning!

The post 6 Ways to Plot a Circle in Matplotlib appeared first on Python Pool.

"CodersLegacy": Python GUI Frameworks [Planet Python]

This article covers the most popular GUI Frameworks in Python.

One of Python’s strongest selling points is the vast number of GUI libraries available for GUI development. GUI development can be a tricky task, but thanks to the tools these Python GUI frameworks provide us, things become much simpler.

While some of the GUI libraries below are similar and directly compete with each other, each library has its own pros and cons. Some libraries are designed for a specific situation, like Kivy is for touchscreen devices. So you don't have to learn just one.

There are a large number of GUI frameworks in Python and we couldn’t possibly cover all of them. Hence we’ll just be discussing 5 of the most popular and important GUI frameworks in Python.


Tkinter GUI

I decided to start with Tkinter as it’s probably the oldest and most well known GUI framework in Python.

Tkinter was released in 1991 and quickly gained popularity due to its simplicity and ease of use compared to other GUI toolkits at the time. In fact, Tkinter is now included in the standard Python Library, meaning you don’t have to download and install it separately.

Other plus points include the fact that Tkinter has a pretty small memory footprint and a quick start-up time. If you were to convert a Tkinter application into an exe with something like PyInstaller, its size would be smaller than the other GUI library equivalents.

The only downside to Tkinter is its rather outdated and old design. If your goal is to create a sleek and modern-looking GUI, Tkinter probably isn't the best choice. Another possible downside is that Tkinter has fewer “special” widgets than the others, such as a VideoPlayer widget. Such widgets are rarely used, but still important.

You can begin learning it with our very own Tkinter Tutorial series.
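As a quick taste of that simplicity, here is a minimal sketch of a Tkinter window; the window title and label text are just examples:

import tkinter as tk

# Create the main window and give it a title.
root = tk.Tk()
root.title("Hello")

# Add a single label widget and start the event loop.
tk.Label(root, text="Hello from Tkinter").pack(padx=20, pady=20)
root.mainloop()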


PyQt5

Python GUI Frameworks: PyQt5

PyQt5 is the Python binding of the popular Qt GUI framework which is written in C++.

PyQt5’s main plus points are its cross-platform ability and modern-looking GUIs. Personally, I’ve noticed quite a few people switching from Tkinter to PyQt5 to be able to create more stylish GUIs.

Another one of PyQt5’s plus points is the Qt Designer. The Qt Designer is a drag and drop kind of tool where you don’t have to code in each widget individually. Instead, you can simply “drag” the widget and “drop” it onto the screen to create a GUI. It’s similar to Windows Form (VB.NET) and the Scene Builder (JavaFX).

PyQt5’s downsides include its relatively large package size and slow start-up speed. Furthermore, PyQt was released under the GPL license. This means you cannot distribute any software containing PyQt code without bundling the source code with it as well. For someone selling commercial software, this is a significant setback. You’ll have to buy a special commercial license, which gives you the right to withhold the source code.

The license issue isn’t something that should bother the average programmer, though. You can begin learning PyQt from our very own tutorial series here!
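For comparison with the Tkinter example above, here is an equally minimal sketch of a PyQt5 window; the label text is again just an example:

import sys

from PyQt5.QtWidgets import QApplication, QLabel

# Every PyQt5 application needs exactly one QApplication instance.
app = QApplication(sys.argv)

# Show a single label as the top-level window.
label = QLabel("Hello from PyQt5")
label.show()

# Start the event loop and exit with its return code.
sys.exit(app.exec_())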


If you’ve narrowed down your GUI of choice between Tkinter and PyQt5 and are having a hard time picking one, I suggest you read this comparison article that compares both in a very detailed manner.


PySide2

We’re bringing up PySide right after PyQt5 due to their strong connection. PySide is also a Python binding of the popular Qt GUI framework. Because of this, the syntax is almost exactly the same, with some very minor differences.

The reason why PyQt is used more nowadays is that its development was faster than that of PySide. When Qt5 was released, PyQt released its binding for it (called PyQt5) in 2016, whereas it took PySide an extra two years to release PySide2 in 2018. If both had been released at the same time, things might have been a bit different today.

All the plus points for PyQt5 also apply to PySide2, with one extra addition. Unlike PyQt5, PySide was released under the LGPL license, allowing you to keep the source code of your distributed programs private. This makes selling commercial applications easier than it would be with PyQt5.

You can learn more about PyQt5 vs PySide2 from this article here.


Kivy

Kivy is an open source multi-platform GUI development library for Python that can run on iOS, Android, Windows, OS X, and Linux.

The Kivy framework is well known for its support for touchscreen devices and its clean, modern-looking GUIs. Its GUIs and widgets have the interactive, multi-touch kind of ability that’s required for any decent GUI on a touchscreen device like a mobile phone.

The one possible downside to GUIs created with Kivy is their non-native look. This may or may not be something you wish to have. Other issues include the smaller community and the lack of documentation compared to more popular GUI libraries like Tkinter.

If you’re looking to develop mostly for desktop, then it’s better to stick to one of the Qt options. Mobile support is Kivy’s greatest draw, after all.


wxPython

wxPython is an open source, cross-platform GUI toolkit for Python. Similar to how PyQt5 is based on the Qt GUI framework, wxPython is based on a GUI framework called wxWidgets, written in C++.

Its purpose is to allow Python developers to create native user interfaces for their GUI applications on a wide variety of operating systems.

The native GUI ability makes GUIs created by wxPython look very natural on whatever operating system they run on. Some people may not want this native look, however, preferring one look/style that is exactly the same across all platforms.


This marks the end of the Python GUI Frameworks article. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article content can be asked in the comments section below.

The post Python GUI Frameworks appeared first on CodersLegacy.

Mike Driscoll: PyDev of the Week: Claudia Regio [Planet Python]

This week we welcome Claudia Regio (@ClaudiaRegio) as our PyDev of the Week! Claudia is a program manager for Python Data Science with a focus on Python Notebooks in Visual Studio Code at Microsoft. She also blogs on Microsoft’s dev blog.

Let’s spend some time getting to know Claudia better!

Can you tell us a little about yourself (hobbies, education, etc):

I am originally from Italy and moved to the Greater Seattle Area when I was 4 years old. Growing up I lived and breathed squash, and had it not been for COVID I still would be! I have been and always will be a huge math nerd and have been tutoring in math for over 10 years now. I attended the University of Washington where I majored in Applied Physics and received two minors in Comprehensive Mathematics and Applied Mathematics while a member of both the Delta Zeta Sorority and the UW Men’s Squash Team.

After graduating I pursued a Data Science Certificate from the University of Washington to enhance my data analysis + data science skills while working at T-Mobile as a Systems Network Architecture Engineer.

Two years after working in that role, I transitioned to Program Manager at Microsoft for the Python Extension in VS Code, focusing on the development of the Data Science & AI components and features.

Why did you start using Python?

The courses in my data science certificate got me started on Python back in 2017.

What other programming languages do you know and which is your favorite?

I learned Java during my time in college and while I enjoyed Java being a strongly typed language, no language beats Python when it comes to data science.

What projects are you working on now?

I am currently managing the Python + Jupyter Extension partnership in VS Code. While our recently released Jupyter Extension provides Jupyter Notebook support for other languages in VS Code Insiders, I focus on the collaboration of the two extensions to create an optimal notebooks experience for data scientists using Python.

Which Python libraries are your favorite (core or 3rd party)?

Scikit-learn will forever have my heart <3

What do you see as the best features of Python Notebooks?

Our best features in Python Notebooks currently include the variable explorer, data viewer, and my personal favorite, Gather (a complimentary VS Code extension).

When experimenting and prototyping in a notebook, it can often become busy as a user explores different approaches. After eventually reaching the desired result (for instance a specific visualization of the data) a user would then need to manually curate the cells involved with this specific flow. This task can be laborious and error-prone, leaving users without a strong approach for aggregating related code. A second scenario that is common among Notebooks users is that software engineers are tasked with turning an existing notebook into a production-ready script. The process of pulling out unneeded imports, graphs, outputs is often highly time consuming and can lead to errors as well. Gather is a complimentary VS Code extension that grabs all the relevant and dependent code required to create the contents of a selected cell and extracts those code dependencies into a new notebook or Python script. This helps save data scientists and developers a lot of notebook cleaning time!

Why should Python developers and data scientists use Visual Studio Code over another editor?

VS Code is a free and open source editor with a family of extensions (from both Microsoft and the open source community), products, and features that aim to make a seamless experience for developers and data scientists. A few examples include:

  • Python (Comes with the Jupyter Extension): Includes features such as IntelliSense, linting, debugging, code navigation, code formatting, Jupyter notebook support, refactoring, variable explorer, test explorer, snippets, and more!
  • Pylance: Language server that supercharges your Python IntelliSense experience with rich type information, helping you write better code faster.
  • Live Share: Enables you to collaboratively edit and debug with others in real-time, regardless of what programming languages you’re using or app types you’re building. It allows you to instantly (and securely) share your current project, and then as needed, share debugging sessions, terminal instances, localhost webapps, voice calls, and more!
  • Gather: A code cleaning tool that uses a static analysis technique to find and then copy all of the dependent code that was used to generate that cell’s result into a new notebook or script.
  • Coding Pack for Python Installer: An installer pack that helps students and new coders quickly get started by installing VS Code, all of the extensions above, as well as Python and common packages such as numpy and pandas.
  • Azure Machine Learning: Easily build, train, and deploy machine learning models to the cloud or the edge with Azure Machine Learning service from the Visual Studio Code interface.
  • Over 350 community-contributed Python-related extensions on the VS Code Marketplace!

It is the partnership constructed amongst these extensions and the open-source community as well as the Developer Division mindset to always build for the customer that creates an unmatchable experience for both developers and data scientists in VS Code.

Is there anything else you’d like to say?

I would like to thank the incredible team I get to work with (David Kutugata, Don Jayamanne, Ian Huff, Jim Griesmer, Joyce Er, Rich Chiodo, Rong Lu) who make this tool come to life and a thank you to all the customers who engage with us and are helping us build the best tool for data scientists!

If anyone would like to provide any additional feedback, feature requests, or help contribute back to the product you can do so here!

Thanks for doing the interview, Claudia!

The post PyDev of the Week: Claudia Regio appeared first on Mouse Vs Python.

11:22

TestDriven.io: Adding Social Authentication to Django [Planet Python]

This tutorial details how to set up social auth with Django and Django Allauth.

Zero-with-Dot (Oleg Żero): Run Jupyter Lab on Google Colaboratory [Planet Python]

Introduction

It’s been quite some time since we wrote on any “engineering-like” topic. As we all want to stay efficient and productive, it is a good time to revisit Google Colaboratory.

Google Colaboratory, or Colab for short, has been a great platform for data scientists and machine-learning enthusiasts in general. It offers a free instance of a GPU and TPU for a limited time, plus it serves a prettified version of a Jupyter notebook. It is a great combination for various smaller or mid-size projects.

Unfortunately, it comes with certain limitations. The biggest ones are the lack of storage persistency, as well as being sort of confined to a single document. Both limitations complicate the development and make working with multiple files less straightforward.

While some good solutions have been developed by the community (including my previous work here and here), many of us are still on the lookout for something like a “data studio” aka Jupyter Lab.

In this article, we will show how to install and run a Jupyter Lab instance on the Google machine through Colab, turning it into a custom solution with a Jupyter Lab frontend and a GPU/TPU backend for free. What is more, the approach presented here is generic and will also allow you to run other services such as Flask. It differs from the solutions presented here or here, as they show how to connect the Colab notebook frontend to a local instance. Here, we will do the exact opposite, so stay on!

General idea

The main idea is to utilize the server that resides behind the Colab notebook and use its backend powers, but replace the frontend. For it to work, the steps go as follows:

  • Tap into the server behind the notebook.
  • Install all the packages we need (e.g. Jupyter Lab).
  • Establish a communication channel.
  • Connect to it and have fun.

Getting started

Go over to https://colab.research.google.com to start a new instance, connect to it, and wait for the resources to be allocated. If you want, now is the time to switch the backend to either GPU or TPU (unless you want to repeat all the steps).

/assets/jupyter-lab-colab/jupyter-lab-colab-1.png Figure 1. The proof we have connected to the Google backend.

Preparing the workspace

The first “hack”

Now, we need to go deeper and talk to the machine behind the notebook rather than to the notebook itself. The standard way to interact with the shell underneath is to prefix bash commands with ! (e.g. !ls -la). However, that may generate some problems later, so it is better to use an alternative way, namely to execute

eval "$SHELL"

in a cell, which will let us communicate directly with the console behind.

Installing Jupyter Lab

Next, we install Jupyter Lab or any other thing for that matter. Natively, Colab does not have it installed, which you can confirm by executing:

!pip list | egrep jupyter

# output
jupyter                       1.0.0
jupyter-client                5.3.5
jupyter-console               5.2.0
jupyter-core                  4.7.0
jupyterlab-pygments           0.1.2
jupyterlab-widgets            1.0.0
!pip install jupyterlab
!pip list | egrep jupyter

# output
jupyter                       1.0.0
jupyter-client                6.1.11
jupyter-console               5.2.0
jupyter-core                  4.7.0
jupyter-server                1.2.2
jupyterlab                    3.0.5
jupyterlab-pygments           0.1.2
jupyterlab-server             2.1.2
jupyterlab-widgets            1.0.0

So now we have all we need when it comes to the Python environment, but we still need to expose it outside of the notebook. For this, we will set up a so-called reverse SSH tunnel.

Reverse SSH tunnel

Reverse SSH tunneling uses the existing connection between two machines to set up a new connection channel back from the local machine to the remote one. As this article explains:

Because the original connection came from the remote computer to you, using it to go in the other direction is using it “in reverse.” And because SSH is secure, you’re putting a secure connection inside an existing secure connection. This means your connection to the remote computer acts as a private tunnel inside the original connection.

Now, as per the vocabulary used by the article, the local machine is actually the Google server that runs Colab. It is this machine’s port we would like to expose to the outside world. However, as we don’t know the outside address of our local machine (or the “remote” one as per the article’s vocabulary), we use a third-party service, namely http://localhost.run/.

This solution was suggested by haqpl, who is a professional pentester and a friend of mine.

It acts as both the end to the reverse SSH tunnel and a normal HTTP server, allowing us to use it as a bridge in communication. In other words, the service completes the SSH tunnel on one end, and an HTTP server on the other connecting our local PC to whatever service we run on Colab.

Generate a public key

Before we start, there is one thing we need to take care of: we need a key pair to secure the SSH channel without a password.

This is the easy part. Detailed instructions can be found on GitHub. For us, it is enough to execute the following command. Don’t worry about a passphrase; just hit enter.

ssh-keygen -t ed25519 -C "your_email@example.com"

By default, the key pair is stored under /root/.ssh/, with the public key in id_ed25519.pub. Next, confirm the ssh-agent is running and register the key.

eval "$(ssh-agent -s)"
ssh-add

# expected response
Identity added: /root/.ssh/id_ed25519 (your_email@example.com)

At this point, we are ready to test the tunnel.

Test the connection

To initialize the connection, we need to pick a port that is unlikely to be used by the system already, for example 9999. It’s a nice number, isn’t it? The command to execute will then map this port to port 80 (the standard port for HTTP connections). Additionally, we need to make the system turn a blind eye to who the host is, hence the -o flag.

ssh -o StrictHostKeyChecking=no -R 80:localhost:9999 ssh.localhost.run

If all goes well, the last line of the response should give you the URL of where to point your local machine to.

/assets/jupyter-lab-colab/jupyter-lab-colab-2.png Figure 2. The SSH reverse tunnel has been established. In our case, the URL is `root-3e42408d.localhost.run`.

However, when you copy-paste it to your browser, the most likely response you will get is Something went wrong opening the port forward, check your SSH command output for clues!. This is OK, as there is really no service running at this port (yet).

Let’s start a small python server under this port (or change it if you used it before).

python -m http.server 9999 & ssh -o StrictHostKeyChecking=no -R 80:localhost:9999 ssh.localhost.run

When the connection is established, you should be able to browse through the files on Colab in your browser, seeing lines like this:

===============================================================================
root-be893e68.localhost.run tunneled with tls termination
127.0.0.1 - - [14/Jan/2021 22:39:21] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [14/Jan/2021 22:39:22] code 404, message File not found
127.0.0.1 - - [14/Jan/2021 22:39:22] "GET /favicon.ico HTTP/1.1" 404 -

printed out in the notebook.

This is a very good sign! It means that if we can run python -m http.server, we can just as well replace it with Flask or Jupyter Lab, and this is exactly what we are about to do!

The final hit

Now, let’s change the port and start the Lab instead. The command to run is a bit lengthy, and the reason is that we must pass the necessary arguments:

  • --ip=0.0.0.0 to stress this is a local machine (local to Colab).
  • --port=8989, of course,
  • --allow-root, otherwise Lab will start but refuse to communicate with you.
jupyter lab --ip=0.0.0.0 --port=8989 --allow-root & ssh -o StrictHostKeyChecking=no -R 80:localhost:8989 ssh.localhost.run

Again, if all goes correctly then, by virtue of the SSH tunnel, we should be able to access Jupyter Lab externally using the URL issued earlier. The only additions will be the parameters and the token you can read from the response.

/assets/jupyter-lab-colab/jupyter-lab-colab-3.png Figure 3. The confirmation that the Jupyter Lab instance is running.

Take the URL given and copy-paste it into another tab in your browser. Remember to replace localhost:8989 with the URL received earlier.

/assets/jupyter-lab-colab/jupyter-lab-colab-4.png Figure 4. The confirmation that the Jupyter Lab is indeed connected to the Colab backend.

Conclusion

This is it! The combination of Jupyter Lab and Google Colaboratory that we created thanks to reverse SSH tunneling (and haqpl) gives probably the ultimate freedom: you now have simplified upload/download of your files, the convenience of organizing your project across multiple files, and really powerful hardware for your calculations… for free.

One word before we go… Remember that although the SSH channel is secure, the session is open to whoever knows your URL. If, as part of your work, you decide to attach e.g. Google Drive to the machine, there is a chance someone may access your files without you even knowing. So please, use this “hack” carefully. Alternatively, you may consider using a virtual private server (VPS) to replace localhost.run and give yourself exclusive ownership of the endpoint.

Thanks for reading! Please, let me know in the comments in case you stumble across problems or have any suggestions. Good luck and have fun!

Ned Batchelder: Flourish [Planet Python]

Flourish is a visual toy app that draws harmonographs, sinuous curves simulating a multi-pendulum trace:

Front page of Flourish, showing thumbnails of harmonographs

Each harmonograph is determined by a few dozen parameter values, which are chosen randomly. The number of parameters depends on the number of pendulums, which defaults to 3.

Click a thumbnail to see a larger version. The large-image pages have thumbnails of “adjacent” images: for each parameter, four nearby values are substituted, giving four thumbnails per parameter. Clicking an adjacent thumbnail continues your exploration of the parameter space:

A large harmonograph, with adjacent thumbnails

The settings dialog lets you adjust the number of pendulums (which determines the number of parameters) and the kinds of symmetry you are interested in.

I started this because I wanted to understand how the parameters affected the outcome, but I was also interested to give it a purely visual design. As an engineer, it was tempting to present the values of the parameters quantitatively, but I like the simplicity of just clicking curves you like.

I repeated a trick I’ve used in other mathy visual toys: when you download a PNG file of an image, the parameter values are stored in a data island in the file. You can re-upload the image, and Flourish will extract the parameters and put you back into the parameter-space exploration at that point.

This is one of those side projects that let me use different sorts of things than I usually do: numpy, SVG, sass, Docker, and so on. I had more ideas for things to add (there is color in the code but not the UI). Maybe someday I will build them.

BTW, I am happy that my first post of 2021 is called “Flourish.” I hope it is a harbinger of things to come.

Python Pool: 7 Ways in Python to Capitalize First Letter of a String [Planet Python]

Hello coders!! In this article, we will be learning how one can capitalize the first letter in the string in Python. There are different ways to do this, and we will be discussing them in detail. Let us begin!

Method 1: str.capitalize() to capitalize the first letter of a string in python:

  • Syntax: string.capitalize()
  • Parameters: no parameters
  • Return Value: string with the first letter capitalized
string = "python pool"
print("Original string:")
print(string)
print("After capitalizing first letter:")
print(string.capitalize())

Output & Explanation:


When we use the capitalize() function, we convert the first letter of the string to uppercase. In this example, the string we took was “python pool.” The function capitalizes the first letter, giving the above result.

Method 2: string slicing + upper():

  • Syntax: string.upper()
  • Parameters: No parameters
  • Return Value:  string where all characters are in upper case
string = "python pool"
print("Original string:")
print(string) 
result = string[0].upper() + string 
print("After capitalizing first letter:")
print(result) 

Output & Explanation:


In this example, we used the slicing technique to extract the string’s first letter, converted it to uppercase with the upper() method and concatenated it with the rest of the string.

Method 3: str.title():

  • Syntax: str.title()
  • Parameters: a string that needs to be converted
  • Return Value: String with every first letter of every word in capital
string = "python pool"
print("Original string:")
print(string)
print("After capitalizing first letter:")
print(str.title(string))

Output & Explanation:


str.title() method capitalizes the first letter of every word and changes the others to lowercase, thus giving the desired output.

Method 4: capitalize() Function to Capitalize the first letter of each word in a string in Python

string = "python pool"
print("Original string:")
print(string)
print("After capitalizing first letter:")
result = ' '.join(elem.capitalize() for elem in string.split())
print(result)

Output & Explanation:


In this example, we used the split() method to split the string into words. We then iterated through it with the help of a generator expression. While iterating, we used the capitalize() method to convert each word’s first letter into uppercase, giving the desired output.

Method 5: string.capwords() to Capitalize first letter of every word in Python:

  • Syntax: string.capwords(string)
  • Parameters: a string that needs formatting
  • Return Value: String with every first letter of each word in capital
import string
txt = "python pool"
print("Original string:")
print(txt)
print("After capitalizing first letter:")
result = string.capwords(txt)
print(result)

Output & Explanation:


The capwords() function does not just convert the first letter of every word into uppercase; it also converts every other letter to lowercase.

Method 6: Capitalize the first letter of every word in the list in Python:

colors=['red','blue','yellow','pink']
print('Original List:')
print(colors)
colors = [i.title() for i in colors]
print('List after capitalizing each word:')
print(colors)

Output & Explanation:


Iterate through the list and use the title() method to convert the first letter of each word in the list to uppercase.

Method 7: Capitalize first letter of every word in a file in Python

file = open('sample1.txt', 'r')
for line in file:
    output = line.title()
    print(output)

Output & Explanation:

Python Pool Is Your Ultimate Destination For Your Python Knowledge

We use the open() function to open the file in read mode. Then we iterate through the file using a loop. After that, we capitalize every word’s first letter using the title() method.


Conclusion:

The various ways to convert the first letter in the string to uppercase are discussed above. All functions have their own application, and the programmer must choose the one which is apt for his/her requirement.

However, if you have any doubts or questions, do let me know in the comment section below. I will try to help you as soon as possible.

Happy Pythoning!

The post 7 Ways in Python to Capitalize First Letter of a String appeared first on Python Pool.

Talk Python to Me: #299 Personal search engine with datasette and dogsheep [Planet Python]

In this episode, we'll be discussing two powerful tools for data reporting and exploration: Datasette and Dogsheep.

Datasette helps people take data of any shape or size, analyze and explore it, and publish it as an interactive website and accompanying API.

Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. Imagine a unified search engine for everything personal in your life such as twitter, photos, google docs, todoist, goodreads, and more, all in one place and outside of cloud companies.

On this episode we talk with Simon Willison who created both of these projects. He's also one of the co-creators of Django and we'll discuss some early Django history!

Links from the show:

  • Datasette: https://datasette.io/
  • Dogsheep: https://dogsheep.github.io/
  • Datasette newsletter: https://datasette.substack.com/
  • Video: Build your own data warehouse for personal analytics with SQLite and Datasette: https://www.youtube.com/watch?v=CPQCD3Qxxik
  • Examples list: https://github.com/simonw/datasette/wiki/Datasettes
  • Personal data warehouses: https://simonwillison.net/2020/Nov/14/personal-data-warehouses/
  • Global power plants: https://global-power-plants.datasettes.com/
  • SF data: https://san-francisco.datasettes.com/
  • FiveThirtyEight: https://fivethirtyeight.datasettes.com/
  • Lahman’s Baseball Database: https://baseballdb.lawlesst.net/
  • Live demo of current main: https://latest.datasette.io/

Sponsors: Linode (https://talkpython.fm/linode), Talk Python Training (https://talkpython.fm/training)

17-01-2021

16:10

Create a MAN page for your own program or script with Pandoc [Linuxtoday.com]

A MAN page is documentation for a software program or script, created in the groff typesetting system - here's an easier way to make one.

Andre Roberge: Friendly contest: the race is on [Planet Python]

tl; dr: Python was wrong ;-)


After one day, I've had one valid entry submitted to the contest I announced yesterday; I've also had two other submissions from the same contributor that I deemed invalid for the contest. The submissions shared a similar characteristic, to different degrees: the information provided by Python in the exception message did not tell the whole story and, taken on its own, might have been considered misleading.

One such case, which I did not consider valid for this contest, was the error message UnboundLocalError: local variable 'x' referenced before assignment, given for code containing the following:

def f():
    x = 1
    del x
    print(x)  # raises the exception

When the exception is raised, there is indeed no trace of variable "x". So while there was an assignment to such a variable before, after deletion it no longer exists. Reading this code, instead of an UnboundLocalError, the exception that should probably be raised at this point is NameError: name 'x' is not defined; however, Friendly-traceback's role is not to second-guess Python but to explain what an exception means and, whenever possible, give more details using the information available when the exception was raised. I considered this case and another one to be beyond the scope of what Friendly-traceback can accomplish.

The entry that I deemed to be valid was based on the following code:

class F:
    __slots__ = ["a"]

f = F()
f.b = 1

The error message given by Python in this case is AttributeError: 'F' object has no attribute 'b'. While technically correct, the problem is not that this object has no such attribute, but that it cannot have such an attribute. This information can easily be obtained when the exception is raised, and the information provided by Friendly-traceback now includes the following:

The object f has no attribute named b. 
Note that object f uses __slots__ which prevents the creation of new
attributes. The following are some of its known attributes: a.

Reminder: the contest is open for 8 more days.




16-01-2021

18:53

Xfce 4.16 Desktop Lands in openSUSE Tumbleweed [Linuxtoday.com]

Xfce 4.16 brings many goodies for fans of the lightweight desktop environment, including fractional scaling, dark mode for the Panel, CSD support, and more.

Python Pool: cPickle in Python Explained With Examples [Planet Python]

Hello geeks, and welcome! In this article, we will cover cPickle, along with some examples for better understanding, and we will see what its applications are. But before moving ahead, let us look at the module's definition to develop a basic understanding of it.

The cPickle module helps us by implementing an algorithm for turning an arbitrary Python object into a series of bytes. The Pickle module implements the same kind of algorithm; the difference between the two is that cPickle is much faster, as it implements the algorithm in C. The only drawback of cPickle over Pickle is that it does not allow the user to subclass from Pickle. In short, we can conclude that cPickle is used for object serialization. Now, in the next section, let us analyze our definition through a bunch of programs.

Application of cPickle

In this section, we will see the applications of cPickle. The first one is pickling the data.

– PICKLING THE DATA

import pickle as cPickle
mylist = ['apple', 'bat', 'cat', 'dog']
with open('data.txt', 'wb') as fh:
    cPickle.dump(mylist, fh)

One thing I would like to make clear first is that there is no cPickle module available in newer Python versions, so we import the pickle module, which automatically uses the C implementation. Now, coming back to the explanation of the above example: first, we imported our module, after which we declared a list. Next, we used the with open command and specified our file's name. Here we used the "wb" mode instead of "w", as all the operations need to be done using byte streams.


After successfully running the program, a file named data.txt is created alongside the script. The creation part is now done, but we can read nothing yet, as the data is in binary form. Next, we will look at how to extract data from the file.

– Extracting data from pickle

import pickle as cPickle
cPickle_off = open("data.txt", "rb")
file = cPickle.load(cPickle_off)
print(file)

From the above example, we can see that we have successfully retrieved our pickled data. To achieve this, we opened our file "data.txt" in binary read mode, loaded it with load(), and printed the result.

cPickle vs Pickle Comparison

This section discusses the main difference between the cPickle and Pickle modules. The Pickle module is used to serialize and de-serialize Python objects. Like the cPickle module, it converts Python objects into a byte stream, which helps store them in a database. The basic difference between them is that cPickle is much faster than Pickle, because cPickle implements the algorithm in C, as the small round-trip sketch below illustrates.
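A minimal sketch of that byte-stream round trip, using the standard pickle module (which uses the C implementation automatically in Python 3):

import pickle

record = {'id': 42, 'tags': ['apple', 'bat']}

# Serialize the object into a byte stream that could be
# stored in a database column or sent over a network.
blob = pickle.dumps(record)

# De-serialize the bytes back into an equal Python object.
restored = pickle.loads(blob)
assert restored == record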

Import error: no module named cPickle

This section discusses one common error many of us face when working with the cPickle module: "no module named cPickle found." As discussed earlier, to avoid such an error, we should import the Pickle module instead of cPickle, which automatically imports the C accelerator. So import pickle and get rid of such errors.


Conclusion

In this article, we looked at cPickle. We covered its definition and looked at its application through an example. We looked at the process of pickling and then retrieving the data from it. In the end, we can conclude that the cPickle module is used for object serialization.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section. Done reading this, why not read about the ogrid function next.

The post cPickle in Python Explained With Examples appeared first on Python Pool.

Andre Roberge: Write bad code to win a prize [Planet Python]

 

Summary

Get a chance of winning a prize by writing code with ONE error that Friendly-traceback cannot properly analyze, in one of three categories:

  • SyntaxError: invalid syntax
  • SyntaxError: some message, where some message is not recognized.
  • Any case of NameError, AttributeError, ModuleNotFoundError, UnboundLocalError, ImportError, IndexError, KeyError that is not recognized or is given an incorrect explanation by Friendly-traceback.

Submitted issues about bugs for Friendly-traceback itself are also considered for this contest.

Links: Github issue

Friendly-traceback documentation

The prize

There will be one prize given drawn randomly from all eligible submissions. The prize consists of one ebook/pbook of your choice with a maximum value of 50 USD (including delivery for pbook) as long as I can order it and have it delivered to you. Alternatively, a donation for that amount to the open source project of your choice if it can be made using Paypal.

The details

Each valid issue will get one entry for the contest. Valid issues contain code that might be expected to be written by a beginner or advanced beginner. It excludes code that uses type annotations as well as the use of async and await keywords.  The code is expected to contain ONE mistake only and not generate secondary exceptions.

The code can be run either using the friendly-console, or running in a Jupyter environment or from an editor as described in the documentation.

For a given valid submission, a bonus entry will be given if a link can be provided to an actual example from a site (such as StackOverflow, /r/python or /r/learnpython, etc.) where a question had been asked prior to this contest.

Exceptions that are not recognized by Friendly-traceback or for which the explanation (in English or French) is wrong or misleading are considered to be valid issues.

Submissions that are considered to be duplicate of previously submitted cases (because they basically have the same cause) will not be considered.

Honor code

I would ask that you do not read the source of Friendly-traceback with the sole intent of finding ways to write code that is designed to lead it to provide incorrect explanations.

End of contest

The contest will end on Monday January 25, at 8 AM Atlantic Standard Time.

11:21

Corona fuels cybercrime [Computable]

The number of registered online crimes more than doubled in 2020 compared to the year before. According to the police, the spectacular increase is mainly due to the corona pandemic.

Van Ark considers intervening in opaque healthcare IT [Computable]

There is still a lack of transparency, data interoperability and freedom of choice in the healthcare IT market. This stands in the way of innovation and efficiency, reports minister Tamara van Ark (Medical Care and Sport). She wants to intervene,...

Meet the New Linux Distro Inspired by the iPad [OMG! Ubuntu!]

JingOS Linux iPadI’ve seen a tonne of Linux distros come and go in the 12 years I’ve been blogging about Ubuntu, but precious few have been designed exclusively for tablet use. So when I came across JingOS, […]

This post, Meet the New Linux Distro Inspired by the iPad is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

15-01-2021

21:26

Dimensys targets German-speaking market with fresh capital [Computable]

SAP partner Dimensys, based in Den Bosch, is receiving a capital injection from investment firm Holland Capital. The money is intended for expansion into Germany, Austria and Switzerland. The parties are not disclosing the size of the investment.

Fitbit officially becomes part of Google [Computable]

The European Commission has given its blessing to the takeover of wearables specialist Fitbit by Google. Proceedings are still pending in the US and Australia, where the authorities are scrutinizing the acquisition.

Python Pool: Unboxing the Python Tempfile Module [Planet Python]

Hello geeks, and welcome! In this article, we will cover the Python tempfile module. Along with that, for an overall better understanding, we will also look at its syntax and parameters. Then we will see the application of all the theory through a couple of examples. tempfile is a standard library module used to create temporary files and directories. These kinds of files come in really handy when we don't wish to store data permanently. If we are working with massive data, these files are created with unique names and stored at a default location, which varies from OS to OS. For instance, on Windows the temp folder resides in profile/AppData/Local/Temp, while it is elsewhere on other systems.

Creating a Temporary File

import tempfile 
  
file = tempfile.TemporaryFile() 
print(file) 
print(file.name)

Output:

<_io.BufferedRandom name=3>
3

Here, we can see how to create a temporary file using the tempfile module. First, we imported the tempfile module, after which we defined a variable and used the TemporaryFile() function to create a temp file. We then used the print statement twice: first to print the file object and second to get the exact file name. The file name is randomly generated and may vary from user to user.

Creating a Named Temporary File

import tempfile

file = tempfile.NamedTemporaryFile()
print(file)
print(file.name)

Output:

<tempfile._TemporaryFileWrapper object at 0x000002756CC7DC40>
C:\Users\KIIT\AppData\Local\Temp\tmpgnp482wy

Here we have created a named temporary file. The only difference, which is quite evident, is that instead of TemporaryFile we have used NamedTemporaryFile. A random file name is allotted, but it is clearly visible, unlike the previous case. Another thing that can be verified here is the structure profile/AppData/Local/Temp (as mentioned for Windows). So far, we have seen how to create a temporary file and a named temporary file.

Creating a Temporary Directory

import tempfile
dir = tempfile.TemporaryDirectory() 
print(dir)

Above, we have created a temporary directory. A directory can be defined as a file system structure that contains the locations of other computer files. There is just a minute change in syntax compared to creating a temporary file: instead of TemporaryFile, we use TemporaryDirectory.

Reading and Writing to a Temporary File

import tempfile 
  
file = tempfile.TemporaryFile() 
file.write(b'WELCOME TO PYTHON PPOOL') 
file.seek(0) 
print(file.read()) 
  
file.close()

Output:

b'WELCOME TO PYTHON PPOOL'

Above, we can see how to read and write in temporary files. First, we created a temporary file. Then we used the write() function to write data into it. You may wonder what the 'b' is doing there: temporary files are opened in binary mode by default, so the b prefix turns the string into a bytes literal. Next, the seek() function is called to set the file pointer at the start of the file. Finally, we used the read() function, which reads the content of the temporary file.

Alternative to Python tempfile()

The Python tempfile module is great, but in this section we will look at one of its alternatives. mkstemp() is a function that does everything TemporaryFile() can do, but in addition it provides security: only the user who created the temp file can write to it. Furthermore, this file does not get deleted automatically when closed.

import tempfile 
   
sec_file = tempfile.mkstemp() 
print(sec_file)

Output:

(3, '/tmp/tmp87gc2pz0')

Here we can see how to create a temporary file using mkstemp(). There is not much syntax change: instead of TemporaryFile() we use mkstemp(), and the rest stays the same.

General FAQs regarding the Python tempfile module

1. How to find the path of python tempfile()?

Ans. To get the path of a temp file, you need to create a named temp file, as already discussed. In that case, you get the exact path of the temp file; in our case, "C:\Users\KIIT\AppData\Local\Temp\tmpgnp482wy".

2. How to perform cleanup for python tempfile()?

Ans. Python itself deletes a temp file once it is closed; see the sketch below.
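A minimal sketch of this behaviour, using the file as a context manager so that it is closed, and therefore deleted, as soon as the block ends:

import tempfile

with tempfile.TemporaryFile() as tmp:
    # Write and read back some scratch data.
    tmp.write(b'scratch data')
    tmp.seek(0)
    print(tmp.read())

# Here the file has been closed and deleted automatically.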

3. How to create secure python tempfile()?

Ans. In order to create a secure temp file, we can use the mkstemp() function, as discussed in detail already. A temp file created this way can only be edited by its creator, and your permission is required for someone else to access it.

4. What is the name for python tempfile() ?

Ans. If you create a temporary file, then it has no name, as discussed above. Whereas when you create a Named tempfile, a random name is allocated to it, visible in its path.

Conclusion

In this article, we covered the Python tempfile module. Besides that, we looked at creating a temporary file and a temporary directory, how to read and write in a temp file, and an alternative called mkstemp(). We can conclude that this module helps us create temporary files in Python.

I hope this article was able to clear all doubts. But in case you have any unsolved queries feel free to write them below in the comment section. Done reading this, why not read about the argpartition function next.

The post Unboxing the Python Tempfile Module appeared first on Python Pool.

Lucas Cimon: Adding content to existing PDFs with fpdf2 [Planet Python]

fpdf2, the library I mentioned in my previous post, cannot parse existing PDF files.

However, other Python libraries can be combined with fpdf2 in order to add new content to existing PDF files.

This page provides several examples of doing so using pdfrw, a great zero-dependency pure Python library dedicated …


10:28

Samsung packs new smartphones with AI [Computable]

Samsung has packed its new high-end smartphones - the S21 series - full of artificial intelligence (AI). Thanks to a more advanced 5-nanometer chip (Exynos 2100), AI that is twice as fast is possible. Per second, it can...

WhatsApp users defect to the competition [Computable]

Users are switching en masse from WhatsApp to alternatives such as Telegram and Signal. This is a consequence of the chat service's new privacy policy, under which users must agree to data sharing between WhatsApp and parent company Facebook. Incidentally, this applies...

Shippeo raises another 32 million dollars [Computable]

Shippeo, originally from France, which gives (transport) companies insight into their logistics chains, has received a capital injection of 32 million dollars. The investment round was led by Battery Ventures. Also participating were NGP Capital, ETF Partners and Bpifrance Digital...

BT sets up new technology division [Computable]

British network operator BT has set up a new technology unit: Digital. This branch focuses on the development and rapid delivery of innovative products, platforms and services in core areas such as health and data. From...

Grab a Glass, Wine 6.0 Has Been Released [OMG! Ubuntu!]

Wine 6.0 is the latest stable release of the open source Windows compatibility layer. Find out what's new and improved, and how to install it on Ubuntu.

This post, Grab a Glass, Wine 6.0 Has Been Released is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Slimbook’s New Linux Gaming Laptop is a Ryzen BEAST [OMG! Ubuntu!]

For a powerful Linux gaming laptop look no further than the new Slimbook Titan. Its specs leave other Linux laptops in the dust, as you're about to see…

This post, Slimbook’s New Linux Gaming Laptop is a Ryzen BEAST is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Finally, an Affordable RISC-V Board With Desktop Linux Support [OMG! Ubuntu!]

BeagleV is a cheap RISC-V development board with full Linux kernel support. This post highlights its specs, price, and details how you can buy one.

This post, Finally, an Affordable RISC-V Board With Desktop Linux Support is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Virtual Machine Startup Shells Closes the Digital Divide One Cloud Computer at a Time [Linux Journal - The Original Magazine of the Linux Community]

Shells Virtual Machine and Cloud Computing

Startup turns devices you probably already own - from smartphones and tablets to smart TVs and game consoles - into full-fledged computers.

Shells (shells.com), a new entrant in the virtual machine and cloud computing space, is excited to launch its new product, which gives users the freedom to code and create on nearly any device with an internet connection. Flexibility, ease, and competitive pricing are a focus for Shells, which makes it easy for a user to start up their own virtual cloud computer in minutes. The company is also offering multiple Linux distros (and continuing to add more offerings) to ensure users can have the computer they “want” and are most comfortable with.

The US-based startup Shells turns idle screens, including smart TVs, tablets, older or low-spec laptops, gaming consoles, smartphones, and more, into fully-functioning cloud computers. The company utilizes real computers, with Intel processors and top-of-the-line components, to send processing power into your device of choice. When a user accesses their Shell, they are essentially seeing the screen of the computer being hosted in the cloud - rather than relying on the processing power of the device they’re physically using.

Shells was designed to run seamlessly on a number of devices that most users likely already own, as long as it can open an internet browser or run one of Shells’ dedicated applications for iOS or Android. Shells are always on and always up to date, ensuring speed and security while avoiding the need to constantly upgrade or buy new hardware.

Shells offers four tiers (Lite, Basic, Plus, and Pro) catering to casual users and professionals alike. Shells Pro targets the latter and offers a quad-core virtual CPU, 8GB of RAM, 160GB of storage, and unlimited access and bandwidth, which makes it a great option for software engineers, music producers, video editors, and other digital creatives.

Using your Shell for testing eliminates the worry associated with tasks or software that could potentially break the development environment on your main computer or laptop. Because Shells are running round the clock, users can compile on any device without overheating - and allow large compile jobs to complete in the background or overnight. Shells also enables snapshots, so a user can revert their system to a previous date or time. In the event of a major error, simply reinstall your operating system in seconds.

“What Dropbox did for cloud storage, Shells endeavors to accomplish for cloud computing at large,” says CEO Alex Lee. “Shells offers developers a one-stop shop for testing and deployment, on any device that can connect to the web. With the ability to use different operating systems, both Windows and Linux, developers can utilize their favorite IDE on the operating system they need. We also offer the added advantage of being able to utilize just about any device for that preferred IDE, giving devs a level of flexibility previously not available.”

“Shells is hyper focused on closing the digital divide as it relates to fair and equal access to computers - an issue that has been unfortunately exacerbated by the ongoing pandemic,” Lee continues. “We see Shells as more than just a cloud computing solution - it’s leveling the playing field for anyone interested in coding, regardless of whether they have a high-end computer at home or not.”

Follow Shells for more information on service availability, new features, and the future of “bring your own device” cloud computing:

Website: https://www.shells.com

Twitter: @shellsdotcom

Facebook: https://www.facebook.com/shellsdotcom

Instagram: https://www.instagram.com/shellscom

14-01-2021

15:03

Fortinet keeps training IT professionals in security for free [Computable]

Fortinet continues to offer around thirty courses in network security free of charge. Participants learn the basic principles of cybersecurity, and IT professionals can dig deeper into specific Fortinet products. The American security specialist thereby hopes to tackle the shortage of security professionals...

Klantenvertellen.nl in the hands of Germany's Ekomi [Computable]

Berlin-based feedback-management provider Ekomi is taking over Tilburg-based Klantenvertellen.nl from the Amsterdam marketing agency Youvia. Kiyoh, the review system for web shops, is also part of the deal. The acquisition sum has not been disclosed.

BNP Paribas chooses Orange in France [Computable]

BNP Paribas has selected Orange Business Services to implement an SD-WAN solution in more than 1,800 bank branches in France. The value of the contract has not been disclosed.

Pegasystems acquires Qurious.io [Computable]

American CRM specialist Pegasystems has announced the acquisition of Qurious.io. That company builds an AI-based cloud solution that lets customer service teams analyse speech in real time.

11:10

Red Hat has its eye on security firm StackRox [Computable]

Open-source specialist Red Hat has announced that it wants to acquire StackRox, a security company focused on container and Kubernetes security. How much Red Hat is willing to put on the table has not been disclosed.

Ubuntu is Making the ‘Home’ Folder Private in 21.04 [OMG! Ubuntu!]

The home folder on future Ubuntu installs will have much tighter permissions by default. We look at the reasons Ubuntu devs have chosen to do this now.

This post, Ubuntu is Making the ‘Home’ Folder Private in 21.04 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

13-01-2021

18:09

Mozilla VPN is Now Available to Mac & Linux Users [OMG! Ubuntu!]

Mozilla VPN now supports Mac and Linux. The subscription-based privacy service launched in 2020 but only for Windows, Android and iOS.

This post, Mozilla VPN is Now Available to Mac & Linux Users is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

11-01-2021

18:08

GNOME 40 Fixes the Issue of Truncated App Nam… [OMG! Ubuntu!]

I know you're thinking "Joey, you've been here before", but this time it's different. Code has been committed and merged. A fix is finally happening.

This post, GNOME 40 Fixes the Issue of Truncated App Nam… is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

gThumb 3.11.2 Released with Minor Improvements [OMG! Ubuntu!]

A new version of the gThumb image viewer and photo manager is available to download. In this post we look at what's new in the gThumb 3.11.2 release.

This post, gThumb 3.11.2 Released with Minor Improvements is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

10-01-2021

20:42

Firefox 86 Will Support Next-Gen Image Format by Default [OMG! Ubuntu!]

Firefox 86 features native support for AVIF image files across all major operating systems. This lightweight image format is up to 50% smaller than JPEG.

This post, Firefox 86 Will Support Next-Gen Image Format by Default is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

08-01-2021

17:18

GNOME’s Bold New Look is Beginning to Take Shape [OMG! Ubuntu!]

Major GNOME Shell design changes are coming, but not everyone is pleased. GNOME devs share an update on their progress and urge users to 'wait' to try it.

This post, GNOME’s Bold New Look is Beginning to Take Shape is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

07-01-2021

20:40

15 Things I Did Post Ubuntu 19.04 Installation [Tech Drive-in]

Ubuntu 19.04, codenamed "Disco Dingo", has been released (and upgrading is easier than you think). I've been on Ubuntu 19.04 since its first Alpha, and this has been a rock-solid release as far as I'm concerned. Changes in Ubuntu 19.04 are more evolutionary, though the availability of the latest Linux kernel, version 5.0, is significant.

ubuntu 19.04 things to do after install

Unity is long gone and Ubuntu 19.04 is indistinguishably GNOME 3.x now, which is not necessarily a bad thing. Yes, I know, there are many who still swear by the simplicity of Unity desktop. But I'm an outlier here, I liked both Unity and GNOME 3.x even in their very early avatars. When I wrote this review of GNOME Shell desktop almost 8 years ago, I knew it was destined for greatness. Ubuntu 19.04 "Disco Dingo" runs GNOME 3.32.0.


We'll discuss GNOME 3.x and Ubuntu 19.04 in more depth in the official review. Let's get down to brass tacks: a step-by-step guide to the things I did after installing Ubuntu 19.04 "Disco Dingo".

1. Make sure your system is up-to-date

Do a full system update. Fire up your Software Updater and check for updates.

how to update ubuntu 19.04

OR
via the Terminal; this is my preferred way to update Ubuntu. Just one command.

sudo apt update && sudo apt dist-upgrade

Enter password when prompted and let the system do the rest.

2. Install GNOME Tweaks

GNOME Tweaks is non-negotiable.

things to do after installing ubuntu 19.04

GNOME Tweaks is an app that lets you tweak little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 19.04, Tweaks is a must. Honestly, I don't remember if it was installed by default, but install it anyway; Apt-URL will prompt you if the app already exists.

Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

You do have an option to install most of the 'restricted extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

4. Display Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self explanatory.

things to do after installing ubuntu 19.04

If you have GNOME Tweaks installed, this is easily done. Open GNOME Tweaks, go to the 'Top Bar' side menu and enable/disable what you need.

5. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there; all you need to do is switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. Now if you didn't find the 'click to minimize' feature useful, you can always revert Dock settings back to its original state, by copy-pasting the following command in Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action
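Either way, you can read the currently active value back with gsettings before or after changing it:

gsettings get org.gnome.shell.extensions.dash-to-dock click-action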

6. Pin/Unpin Apps from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

things to do after ubuntu 19.04
 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on launcher. But I would prefer a shortcut to Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.

7. Enable GNOME Shell Extensions Support

Extensions are an integral part of GNOME desktop.

It's a real shame that one has to go through all this for such a basic yet important feature. When you visit the GNOME Extensions page in the default Firefox browser, you will notice a warning message on top describing the unavailability of Extensions support; installing the browser add-on it offers is the first part of the fix.
For the second part, you need to install the host connector on Ubuntu.
sudo apt install chrome-gnome-shell
  • Done. Don't mind the "chrome" in 'chrome-gnome-shell'; it works with all major browsers, provided you have the correct browser add-on installed. 
  • You can now visit the GNOME Extensions page and install extensions with ease. (If it doesn't work immediately, a system restart will clear things up.) 
Extensions are such an integral part of the GNOME desktop experience that I can't understand why this is not a system default in Ubuntu 19.04. I hope future releases of Ubuntu will have this figured out.

8. My Favourite 5 GNOME Shell Extensions for Ubuntu 19.04


9. Remove Trash Icon from Desktop

Annoyed by the permanent presence of Home and Trash icons on the desktop? You are not alone. Luckily, there's an extension for that!
Once the extension is installed, access its settings and enable/disable icons as you please. 


Extension settings can be accessed directly from the extension home page (notice the small wrench icon near the ON/OFF toggle). OR you can use the Extensions addon like in the screenshot above.

10. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling has been a system default for some time now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings'. Settings > Mouse and Touchpad

Quick tip: You can go straight to submenus by simply searching for them in GNOME's universal search bar.

ubuntu 19.04 disco

Take for example the screenshot above, where I triggered the GNOME menu by hitting Super(Windows) key, and simply searched for 'mouse' settings. The first result will take me directly to the 'Settings' submenu for 'Mouse and Touchpad' that we saw earlier. Easy right? More examples will follow.

11. Nightlight Mode ON

When you're glued to your laptop/PC screen for a large amount of time every day, it is advisable to enable the automatic nightlight mode for the sake of your eyes. Be it the laptop or my phone, this has become an essential feature for me. The sight of an LED display without nightlight ON during lowlight conditions immediately gives me a headache these days. Easily one of my favourite built-in features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 19.04

OR, as before, hit the Super key and search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples of that.

things to do after installing ubuntu 19.04

12. Privacy on Ubuntu 19.04

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 19.04 privacy

Ubuntu remembers your usage & history to recommend frequently used apps and such, and this is never shared over the network. But if you're not comfortable with this, you can always disable and delete your usage history on Ubuntu. Settings > Privacy > Usage & History 

13. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 19.04

Right now I'm using System76's Pop OS GTK theme and icon sets. They look pretty neat, I think. Just three commands to install them on Ubuntu 19.04 (plus an optional fourth for the wallpapers).

sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers

Execute the last command if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon sets, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions, so stay subscribed. 

14. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.


Settings > Privacy > Problem Reporting and switch it off. 
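Prefer the terminal? As in previous Ubuntu releases, the same switch lives in Apport's config file:

sudo gedit /etc/default/apport

Change the "enabled=1" entry to "enabled=0", save, and error reporting is fully disabled.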

15. Liberate vertical space on Firefox by disabling Title Bar

This is not an Ubuntu specific tweak.


Firefox > Settings > Customize. Notice the "Title Bar" at the bottom left? Untick to disable.

Follow us on Facebook, and Twitter.

Look up Uber Time, Price Estimates on Terminal with Uber CLI [Tech Drive-in]

The worldwide phenomenon that is Uber needs no introduction. Uber is an immensely popular ride-sharing and ride-hailing company valued in the billions. Uber is so disruptive and controversial that many cities and even countries are putting up barriers to protect the interests of local taxi drivers.

Enough about Uber as a company. To those among you who regularly use Uber app for booking a cab, Uber CLI could be a useful companion.


Uber CLI can be a great tool for the easily distracted. This unique command-line application allows you to look up Uber cab time and price estimates without ever taking your eyes off the laptop screen.

Install Uber-CLI using NPM

You need to have npm first to install Uber-CLI on Ubuntu. npm, short for Node package manager, is a package manager for the JavaScript programming language and the default package manager for the Node.js runtime environment. npm has a command-line client and its own repository of packages.

This is how to install npm on Ubuntu 19.04 and Ubuntu 18.10, and thereafter, using npm, install Uber-CLI. Fire up the Terminal and run the following.

sudo apt update
sudo apt install nodejs npm
npm install uber-cli -g

And you're done. Uber CLI is a command-line application; here are a few examples of how it works in Terminal. Also, since Uber is not available where I live, I can't vouch for its accuracy.


Uber-CLI has just two use cases:

uber time 'pickup address here'
uber price -s 'start address' -e 'end address'

Easy, right? I did some testing with places and addresses I'm familiar with, where Uber cabs are fairly common, and I found the results to be fairly accurate. Do test and leave feedback. See the Uber CLI github page for more info.
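For a concrete feel, an illustrative session (the addresses here are made up; any real street address works the same way):

uber time 'Central Station, Amsterdam'
uber price -s 'Central Station, Amsterdam' -e 'Schiphol Airport'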

Retro Terminal that Emulates Old CRT Display (Ubuntu 18.10, 18.04 PPA) [Tech Drive-in]

We've featured cool-retro-term before. It is a wonderful little terminal emulator app for Ubuntu (and Linux) that sports the cool retro look of old CRT displays.

Let the pictures speak for themselves.

retro terminal ubuntu ppa

Pretty cool, right? Not only does it look cool, it functions just like a normal Terminal app. You don't lose out on any features normally associated with a regular terminal emulator. cool-retro-term comes with a bunch of themes and customisations that take its retro-cool appeal a few notches higher.

cool-old-term retro terminal ubuntu linux

Enough now, let's find out how to install this retro-looking terminal emulator on Ubuntu 18.04 LTS and Ubuntu 18.10. Fire up your Terminal app, and run these commands one after the other.

sudo add-apt-repository ppa:vantuz/cool-retro-term
sudo apt update
sudo apt install cool-retro-term

Done. The above PPA supports Ubuntu Artful, Bionic and Cosmic releases (Ubuntu 17.10, 18.04 LTS, 18.10). cool-retro-term is now installed and ready to go.


Since I don't have Artful or Bionic installations in any of my computers, I couldn't test the PPA on those releases. Do let me know if you faced any issues while installing the app.

And as some of you might have noticed, I'm running cool-retro-term from an AppImage. This is because I'm on Ubuntu 19.04 "disco dingo", and obviously the app doesn't support an unreleased OS (well, duh!).

retro terminal ubuntu ppa

This is how it looks on fullscreen mode. If you are a non-Ubuntu user, you can find various download options here. If you are on Fedora or distros based on it, cool-retro-term is available in the official repositories.

Komorebi Wallpapers display Live Time & Date, Stunning Parallax Effect on Ubuntu [Tech Drive-in]

Live wallpapers are not a new thing. In fact, we had a lot of live wallpapers to choose from on Linux 10 years ago. Today? Not so much. In fact, be it GNOME or KDE, most desktops today are far less customizable than they used to be. The Komorebi wallpaper manager for Ubuntu is kind of a wayback machine in that sense.

ubuntu live wallpaper

Install Gorgeous Live Wallpapers in Ubuntu 18.10/18.04 using Komorebi

Komorebi Wallpaper Manager comes with a pretty neat collection of live wallpapers and even video wallpapers. The package also contains a simple tool to create your own live wallpapers.


Komorebi comes packaged in a convenient 64-bit DEB package, making it super easy to install on Ubuntu and most Debian-based distros (the latest version dropped 32-bit support, though).  
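The download link didn't survive into this excerpt, but installing a downloaded DEB is a one-liner; the file name below is illustrative, so match it to the release you grab:

sudo apt install ./komorebi.deb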
ubuntu 18.10 live wallpaper

That's it! Komorebi is installed and ready to go! Now launch Komorebi from the app launcher.

ubuntu komorebi live wallpaper

And finally, to uninstall Komorebi and revert all the changes you made, do this in Terminal (CTRL+ALT+T).

sudo apt remove komorebi

Komorebi works great on Ubuntu 18.10, and 18.04 LTS. A few more screenshots.

komorebi live wallpaper ubuntu

As you can see, live wallpapers obviously consume more resources than a regular wallpaper, especially when you switch on Komorebi's fancy video wallpapers. But it is definitely not the resource hog I feared it would be.

ubuntu wallpaper live time and date

Like what you see here? Go ahead and give Komorebi Wallpaper Manager a spin. Does it turn out to be not so resource-friendly on your PC? Let us know your opinion in the comments. 

ubuntu live wallpapers

A video wallpaper example. To see them in action, watch this demo.

Snap Install Mario Platformer on Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

Nintendo's Mario needs no introduction. This game defined our childhoods. Now you can install and have fun with an unofficial version of the famed Mario platformer in Ubuntu 18.10 via this Snap package.

install Mario on Ubuntu

Play Nintendo's Mario Unofficially on Ubuntu 18.10

"Mari0 is a Mario + Portal platformer game." It is not an official release and hence the slight name change (Mari0 instead of Mario). Mari0 is still in testing, and might not work as intended. It doesn't work fullscreen for example, but everything else seems to be working great in my PC.

But please be aware that this app is still in testing, and a lot of things can go wrong. Mari0 also comes with joystick support. Here's how you install the unofficial Mari0 snap package. Do this in Terminal (CTRL+ALT+T):

sudo snap install mari0

To enable joystick support:

sudo snap connect mari0:joystick

nintendo mario ubuntu

Please find time to provide valuable feedback to the developer after testing, especially if something went wrong. You can also leave your feedback in the comments below.

Oranchelo - The icon theme to beat on Ubuntu 18.10 [Tech Drive-in]

OK, that might be an overstatement. But Oranchelo is good, really good.


Oranchelo Icons Theme for Ubuntu 18.10

Oranchelo is a flat-design icon theme originally designed for the XFCE4 desktop, though it works great on GNOME as well. I especially like the distinct take on the Firefox and Chromium icons, as you can see in the screenshot.



Here's how you install Oranchelo icons theme on Ubuntu 18.10 using Oranchelo PPA. Just copy-paste the following three commands to Terminal (CTRL+ALT+T).

sudo add-apt-repository ppa:oranchelo/oranchelo-icon-theme
sudo apt update
sudo apt install oranchelo-icon-theme

Now run GNOME Tweaks, Appearance > Icons > Oranchelo.


Meet the artist behind Oranchelo icons theme at his deviantart page. So, how do you like the new icons? Let us know your opinion in the comments below.


11 Things I did After Installing Ubuntu 18.10 Cosmic Cuttlefish [Tech Drive-in]

Have been using "Cosmic Cuttlefish" since its first beta. It is perhaps one of the most visually pleasing Ubuntu releases ever. But more on that later. Now let's discuss what can be done to improve the overall user-experience by diving deep into the nitty gritties of Canonical's brand new flagship OS.

1. Enable MP3/MP4/AVI Playback, Adobe Flash etc.

This has been perhaps the standard 'first thing to do' ever since the Ubuntu age dawned on us. You do have an option to install most of the 'restricted extras' while installing the OS itself now, but if you are not sure you've ticked all the right boxes, just run the following command in Terminal.

sudo apt install ubuntu-restricted-extras

OR

You can install it straight from the Ubuntu Software Center by CLICKING HERE.

2. Get GNOME Tweaks

GNOME Tweaks is non-negotiable.

things to do after installing ubuntu 18.10

GNOME Tweaks is an app that lets you tweak little things in GNOME-based OSes that are otherwise hidden behind menus. If you are on Ubuntu 18.10, Tweaks is a must. Honestly, I don't remember if it was installed by default, but install it anyway; Apt-URL will prompt you if the app already exists.


Search for Gnome Tweaks in Ubuntu Software Center. OR simply CLICK HERE to go straight to the app in Software Center. OR even better, copy-paste this command in Terminal (keyboard shortcut: CTRL+ALT+T).

sudo apt install gnome-tweaks

3. Displaying Date/Battery Percentage on Top Panel  

The screenshot, I hope, is self explanatory.

things to do after installing ubuntu 18.10

If you have GNOME Tweaks installed, this is easily done. Open GNOME tweaks, goto 'Top Bar' sidemenu and enable/disable what you need.

4. Enable 'Click to Minimize' on Ubuntu Dock

Honestly, I don't have a clue why this is disabled by default. You intuitively expect the app shortcuts on the Ubuntu dock to 'minimize' when you click on them (at least I do).

In fact, the feature is already there; all you need to do is switch it ON. Do this in Terminal.

gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'

That's it. Now if you didn't find the 'click to minimize' feature useful, you can always revert Dock settings back to its original state, by copy-pasting the following command in Terminal app.

gsettings reset org.gnome.shell.extensions.dash-to-dock click-action

5. Pin/Unpin Useful Stuff from Launcher

There are a bunch of apps that are pinned to your Ubuntu launcher by default.

things to do after ubuntu 18.10
 
For example, I almost never use the 'Help' app or the 'Amazon' shortcut preloaded on launcher. But I would prefer a shortcut to Terminal app instead. Right-click on your preferred app on the launcher, and add-to/remove-from favorites as you please.

6. Enable/Disable Two Finger Scrolling

As you must've noticed, two-finger scrolling is a system default now. 

things to do after installing ubuntu cosmic
 
One of my laptops acts strangely when two-finger scrolling is on. You can easily disable two-finger scrolling and enable old-school edge scrolling in 'Settings'. Settings > Mouse and Touchpad

Quick tip: You can go straight to submenus by simply searching for them in GNOME's universal search bar.

ubuntu 18.10 cosmic

Take for example the screenshot above, where I triggered the GNOME menu by hitting Super(Windows) key, and simply searched for 'mouse' settings. The first result will take me directly to the 'Settings' submenu for 'Mouse and Touchpad' that we saw earlier. Easy right? More examples will follow.

7. Nightlight Mode ON

When you're glued to your laptop/PC screen for a large amount of time every day, it is advisable to enable the automatic nightlight mode for the sake of your eyes. Be it the laptop or my phone, this has become an essential feature for me. The sight of an LED display without nightlight ON during lowlight conditions immediately gives me a headache these days. Easily one of my favourite built-in features on GNOME.


Settings > Devices > Display > Night Light ON/OFF

things to do after installing ubuntu 18.10

OR, as before, hit the Super key and search for 'night light'. It will take you straight to the submenu under Devices > Display. Guess you won't need any more examples of that.

things to do after installing ubuntu 18.10

8. Safe Eyes App for Ubuntu

A popup that fills the entire screen and forces you to take your eyes off it.

apps for ubuntu 18.10

Apart from enabling the nightlight mode, Safe Eyes is another app I strongly recommend to those who stare at their laptops for long periods of time. This nifty little app forces you to take your eyes off the computer screen and do some standard eye exercises at regular intervals (which you can change).

things to do after installing ubuntu 18.10

Installation is pretty straightforward. Just these three commands in your Terminal.

sudo add-apt-repository ppa:slgobinath/safeeyes
sudo apt update
sudo apt install safeeyes

9. Privacy on Ubuntu 18.10

Guess I don't need to lecture you on the importance of privacy in the post-PRISM era.

ubuntu 18.10 privacy

Ubuntu remembers your usage & history to recommend frequently used apps and such, and this is never shared over the network. But if you're not comfortable with this, you can always disable and delete your usage history on Ubuntu. Settings > Privacy > Usage & History 

10. Perhaps a New Look & Feel?

As you might have noticed, I'm not using the default Ubuntu theme here.

themes ubuntu 18.10

Right now I'm using System76's Pop OS GTK theme and icon sets. They look pretty neat, I think. Just three commands to install them on Ubuntu 18.10 (plus an optional fourth for the wallpapers).

sudo add-apt-repository ppa:system76/pop
sudo apt-get update
sudo apt install pop-icon-theme pop-gtk-theme pop-gnome-shell-theme
sudo apt install pop-wallpapers

Execute the last command if you want the Pop OS wallpapers as well. To enable the newly installed theme and icon sets, launch GNOME Tweaks > Appearance (see screenshot). I will be making separate posts on themes, icon sets and GNOME shell extensions, so stay subscribed. 

11. Disable Error Reporting

If you find the "application closed unexpectedly" popups annoying, and would like to disable error reporting altogether, this is what you need to do.

sudo gedit /etc/default/apport

This will open up a text editor window which has only one entry: "enabled=1". Change the value to '0' (zero) and you have Apport error reporting completely disabled.


Follow us on Facebook, and Twitter

How to Upgrade from Ubuntu 18.04 LTS to 18.10 'Cosmic Cuttlefish' [Tech Drive-in]

One day left before the final release of Ubuntu 18.10 codenamed "Cosmic Cuttlefish". This is how you make the upgrade from Ubuntu 18.04 to 18.10.

Upgrade to Ubuntu 18.10 from 18.04

Ubuntu 18.10 has a brand new look!
As you can see from the screenshot, a lot has changed. Ubuntu 18.10 arrives with a major theme overhaul. After almost a decade, the default Ubuntu GTK theme ("Ambiance") is being replaced with a brand new one called "Yaru". The new theme is based heavily on GNOME's default "Adwaita" GTK theme. More on that later.

Upgrade from Ubuntu 18.04 LTS to 18.10
If you're on Ubuntu 18.04 LTS, upgrading to 18.10 "cosmic" is a pretty straightforward affair. Since 18.04 is a long-term support (LTS) release (meaning the OS will get official updates for about 5 years), it may not prompt you with an upgrade option when 18.10 finally arrives. 

So here's how it's done. Disclaimer: back up your critical data before going forward, and better yet, don't try this on mission-critical machines. You're on LTS anyway.
  • An up-to-date Ubuntu 18.04 LTS is the first step. Do the following in Terminal.
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt autoremove
  • The first command will check for updates and then proceed with upgrading your Ubuntu 18.04 LTS with the latest updates. The "autoremove" command will clean up any and all dependencies that were installed with applications and are no longer required.
  • Now the slightly tricky part: you need to edit the /etc/update-manager/release-upgrades file and change the Prompt=never entry to Prompt=normal, or else you will get a "no release found" error message. 
  • I used Vim to make the edit. But for the sake of simplicity, let's use gedit. 
$ sudo gedit /etc/update-manager/release-upgrades
  • Make the edit and save the changes. Now you are ready to go ahead with the upgrade. Make sure your laptop is plugged in; this will take time. 
  • To be on the safer side, please make sure that there's at least 5GB of disk space left in your home partition (it will prompt you and exit if you don't have enough space required for the upgrade). 
$ sudo do-release-upgrade -d
  • That's it. Wait for a few hours and let it do its magic. 
My upgrade to Ubuntu 18.10 was uneventful. Nothing broke and it all worked like a charm. After the upgrade is done, you're probably still stuck with your old theme. Fire up the "GNOME Tweaks" app (get it from the Software Center if you haven't already), and change the theme and the icons to "Yaru". 

10:18

Linux Mint 20.1 Available to Download, This is What’s New [OMG! Ubuntu!]

Linux Mint 20.1 is now available to download. Learn more about the new release, including its new features, new apps, and key changes, in this blog post.

This post, Linux Mint 20.1 Available to Download, This is What’s New is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

An Introduction to Linux Gaming thanks to ProtonDB [Linux Journal - The Original Magazine of the Linux Community]

An Introduction to Linux Gaming thanks to ProtonDB

Video Games On Linux? 

In this article, the newest compatibility feature for gaming will be introduced and explained for all you dedicated video game fanatics. 

Valve has released its new compatibility feature to advance Linux gaming, complete with its own community of play testers and reviewers.

In recent years we have made leaps and strides in making Linux and Unix systems more accessible for everyone. Now we come to a commonly asked question: can we play games on Linux? Well, of course! And also "almost", as I'll explain. 

Proton compatibility layer for Steam client 

With the rising popularity of Linux systems, Valve is going ahead of the crowd yet again with Proton for its Steam client (the computer program that runs your purchased games from Steam). Proton is a variant of Wine and DXVK that lets Microsoft Windows games run on Linux operating systems. Proton is backed by Valve itself and can easily be added to any Steam account for Linux gaming, through an integration called "Steam Play." 

Lately, there has been a lot of controversy as Microsoft is rumored to someday release its own app store and disable downloading software online. In response, many companies and software developers feel pressured to find a new "haven" for sharing content on the internet. Proton might be Valve's response to this, as Valve works to make more of its games accessible to Linux users. 

Activating Proton with Steam Play 

Proton is integrated into the Steam client with "Steam Play." To activate Proton, go into your Steam client and click on Steam in the upper right corner. Then click on Settings to open a new window.

Linux Gaming Steamplay
Steam Client's settings window

 

From here, click on the Steam Play button at the bottom of the panel. Click "Enable Steam Play for Supported Titles." Afterwards, it will ask you to restart Steam; click yes, and you are ready to play after the restart.

Your computer will now play all of Steam's whitelisted games seamlessly. But if you would like to try other games that are not guaranteed to work on Linux, then click "Enable Steam Play for All Other Titles."

What Happens if a Game has Issues?

Don't worry, this can and will happen for games that are not in Steam's whitelisted games archive. But there is help for you online on Steam and in Proton's growing community. Be patient and don't give up! There will always be a solution out there.

06-01-2021

17:56

New Wallpaper for KDE Plasma 5.21 Revealed [OMG! Ubuntu!]

KDE developers reveal the new desktop wallpaper that will ship in KDE Plasma 5.21 this spring. The background is called "Milky Way" and looks like this…

This post, New Wallpaper for KDE Plasma 5.21 Revealed is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

05-01-2021

02-01-2021

16:06

KDE Tease ‘Production Ready’ Wayland Support, New App Menu in 2021 [OMG! Ubuntu!]

KDE Plasma users can look forward to several big changes this year, including an improved Wayland session, fingerprint support, and a new app menu.

This post, KDE Tease ‘Production Ready’ Wayland Support, New App Menu in 2021 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

Cawbird 1.3 Released with Improved DM Support, Video Uploading [OMG! Ubuntu!]

Cawbird 1.3 is available to download. The latest version of this GTK Twitter client for Linux desktops includes a number of improvements to direct messaging.

This post, Cawbird 1.3 Released with Improved DM Support, Video Uploading is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

30-12-2020

17:33

How to Enable ‘Fuzzy Search’ in GNOME Shell’s Applications Screen [OMG! Ubuntu!]

Add fuzzy search to GNOME Shell using this free GNOME extension. It returns fuzzy matching app results in the GNOME Shell applications screen.

This post, How to Enable ‘Fuzzy Search’ in GNOME Shell’s Applications Screen is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

29-12-2020

10:23

5 Best Linux Distro Releases of 2020, Including Fedora, Manjaro & Pop!_OS [OMG! Ubuntu!]

Here are the best Linux distros of 2020 according to readers of this site. Their selection includes the latest Ubuntu LTS release, Manjaro, and more.

This post, 5 Best Linux Distro Releases of 2020, Including Fedora, Manjaro & Pop!_OS is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

28-12-2020

27-12-2020

25-12-2020

11:33

Darktable 3.4 Gives Open Source Photographers New Toys to Play With [OMG! Ubuntu!]

Darktable 3.4 has been released. This update to the open source Adobe Lightroom alternative adds new features, GUI changes, and better CPU performance.

This post, Darktable 3.4 Gives Open Source Photographers New Toys to Play With is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

How To Use GUI LVM Tools [Linux Journal - The Original Magazine of the Linux Community]

GUI LVM Tools

LVM is a powerful storage management module which is now included in all Linux distributions. It provides users with a variety of valuable features to fit different requirements. The management tools that come with LVM are based on the command-line interface, which is very powerful and suitable for automated/batch operations. But LVM's operations and configuration are quite complex. So many software companies, including Red Hat, have launched GUI-based LVM tools to help users manage LVM more easily. Let's review them here to see the similarities and differences between the individual tools.

system-config-lvm (alternate name LVM GUI)

Provider: Red Hat

system-config-lvm is the first GUI LVM tool; it was originally released as part of Red Hat Linux, and it is also called LVM GUI because it was the first one. Later, Red Hat created a standalone installation package for it, so system-config-lvm can be used in other Linux distributions. The installation packages include RPM and DEB packages.

The main panel of system-config-lvm

system-config-lvm only supports LVM-related operations. Its user interface is divided into three parts. The left part is a tree view of disk devices and LVM devices (VGs); the middle part is the main view, which shows VG usage divided into LV and PV columns.

There are zoom-in/zoom-out buttons in the main view to control the display ratio, but that is not enough for displaying complex LVM information. The right part displays details of the selected objects (PV/LV/VG).

The different versions of system-config-lvm are not completely consistent in how they organize devices. Some of them show both LVM devices and non-LVM devices (disks); the others show LVM devices only. I have tried two versions: one shows only the LVM devices existing in the system (PV/VG/LV) and no other devices; the other can display non-LVM disks, and a PV can be removed in the disk view.

The version which shows non-lvm disks

Supported operations

PV Operations

  • Delete PV
  • Migrate PV

VG Operations

  • Create VG
  • Append PV to VG/Remove PV from VG
  • Delete VG (Delete last PV in VG)

LV Operations

24-12-2020

10:58

Xfce 4.16 Released with New Features & Visual Changes [OMG! Ubuntu!]

Xfce 4.16 has been released. This version of the Linux desktop environment gains several new features, usability tweaks, and a roster of visual changes.

This post, Xfce 4.16 Released with New Features & Visual Changes is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

23-12-2020

15:47

This $80 Games Console Looks Like a Switch, But Runs Ubuntu [OMG! Ubuntu!]

Dig the look of the Nintendo Switch but prefer gaming on Linux? Meet the ODroid Go Super, an $80 ARM-powered games console that runs Ubuntu.

This post, This $80 Games Console Looks Like a Switch, But Runs Ubuntu is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

22-12-2020

12:35

Kdenlive 20.12 Released, Adds Several New Features, New Video Effect [OMG! Ubuntu!]

Kdenlive 20.12 offers a new subtitle editor, same track transition support, and a vertical video effect. We recap these and other changes in this release.

This post, Kdenlive 20.12 Released, Adds Several New Features, New Video Effect is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

21-12-2020

12:13

Major Design Changes Planned for GNOME 40 [OMG! Ubuntu!]

GNOME developers have unveiled an ambitious set of UX changes they hope to implement in GNOME 40. We take a closer look at the proposed redesign.

This post, Major Design Changes Planned for GNOME 40 is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

17-12-2020

21:02

The Document Foundation releases LibreOffice 7.0.4 [Press Releases – The Document Foundation Blog]

Berlin, December 17, 2020 – LibreOffice 7.0.4, the fourth minor release of the LibreOffice 7.0 family, is available from https://www.libreoffice.org/download/. All users are invited to update to this version, as the LibreOffice 6.4 family won’t be updated, having reached end-of-life. LibreOffice 7.0.4 includes over 110 bug fixes and improvements to document compatibility.

LibreOffice offers the highest level of compatibility in the office suite arena, starting from native support for the OpenDocument Format (ODF) – with better security and interoperability features – to wide support for proprietary formats. End user support is provided by volunteers via email and online resources: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.

Support for migrations and training should be sourced from certified professionals who provide value-added services which extend the reach of the community to the corporate world. Also, the work done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice 7.0.4 change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.0.4/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.0.4/RC2 (changed in RC2). All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

Boost Up Productivity in Bash - Tips and Tricks [Linux Journal - The Original Magazine of the Linux Community]

Bash Tips and Tricks

Introduction

When spending most of your day around the bash shell, it is not uncommon to waste time typing the same commands over and over again. This is pretty close to the definition of insanity.

Luckily, bash gives us several ways to avoid repetition and increase productivity.

Today, we will explore the tools we can leverage to optimize what I love to call “shell time”.

Aliases

Bash aliases are one way to define custom commands or override default ones.

You can consider an alias as a “shortcut” to your desired command with options included.

Many popular Linux distributions come with a set of predefined aliases.

Let’s see the default aliases of Ubuntu 20.04. To do so, simply type “alias” and press [ENTER].

Bash Tips and Tricks 1

By simply issuing the command “l”, behind the scenes, bash will execute “ls -CF”.

It's as simple as that.
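For reference, the relevant part of that output on a stock Ubuntu 20.04 install looks roughly like this (abridged, and quoted from memory, so treat it as an approximation):

alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
alias grep='grep --color=auto'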

This is definitely nice, but what if we could specify our own aliases for the most used commands?! The answer is, of course we can!

One of the commands I use extremely often is “cd ..” to change the working directory to the parent folder. I have spent so much time hitting the same keys…

One day I decided it was enough and I set up an alias!

To create a new alias, type “alias” followed by the alias name (in my case I have chosen “..”), then “=”, and finally the command we want an alias for, enclosed in single quotes.

Here is an example below.

Bash Tips and Tricks 2
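In plain text, the alias from that example comes down to a single line, which you can also drop into your ~/.bashrc to make it permanent:

alias ..='cd ..'

From then on, typing “..” jumps straight to the parent directory.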

Functions

Sometimes you will need to automate a complex command, perhaps one that accepts arguments as input. Under these constraints, aliases will not be enough to accomplish your goal, but no worries: there is always a way out!

Functions give you the ability to create complex custom commands which can be called directly from the terminal like any other command.

For instance, there are two consecutive actions I do all the time: creating a folder and then cd-ing into it. To avoid the hassle of typing “mkdir newfolder” and then “cd newfolder”, I have created a bash function called “mkcd” which takes the name of the folder to be created as an argument, creates the folder, and cds into it.

To declare a new function, we need to type the function name “mkcd ”, followed by “()”, and our complex command enclosed in curly brackets: “{ mkdir -vp "$@" && cd "$@"; }”
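Written out in full, the declaration is a single line you can drop into your ~/.bashrc, followed here by a sample invocation:

mkcd () { mkdir -vp "$@" && cd "$@"; }

mkcd newfolder   # creates ./newfolder (verbosely) and changes into it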

Case Study: Success of Pardus GNU/Linux Migration [Linux Journal - The Original Magazine of the Linux Community]

Pardus GNU/Linux Migration

Eyüpsultan Municipality decided to use an open source operating system on its desktop computers in 2015.

The most important goal of the project was to ensure information security and reduce foreign dependency.

As a result of the research and analyses carried out, a detailed migration plan was drawn up.

As a first step, the licensed office software installed on all computers was removed, and LibreOffice was installed instead.

Later, LibreOffice training was given to the municipal staff.

Pardus GNU/Linux

Meanwhile, preparations were made for the operating system migration.

Instead of the existing licensed operating system, it was decided to use Pardus GNU/Linux, a distribution developed in Turkey.

Applications on the Pardus GNU/Linux operating system were examined in detail and unnecessary applications were removed.

And a new ISO file was created with the applications used in Eyüpsultan municipality.

This process automated the setup steps and reduced setup time.

While the project continued at full speed, the staff were again trained on LibreOffice and Pardus GNU/Linux.

After their training, the users took the exam.

The Pardus GNU/Linux operating system was installed on the computers of those who passed.

Those who failed were retrained and took the exam again.

As of 2016, the operating system migration had been completed on 25% of the computers.

Migration Project Implementation Steps

Analysis

A detailed inventory of all software and hardware products used in the institution was created. The analysis should go down to department, unit and personnel level.

It should be evaluated whether extra costs will arise in the migration project.

Planning

A migration plan should be prepared and migration targets determined.

The duration of the migration should be calculated and the team that will carry out the migration should be determined.

Production

You can use an existing Linux distribution.

Or you can customize the distribution you will use according to your own preferences.

Making a customized ISO file will give you speed and flexibility.

It also helps you compensate for the loss of time caused by incorrect entries.

Test

Start using the ISO file you have prepared in a lab environment consisting of the hardware you use.

Look for solutions, noting any problems encountered during and after installation.

BPF For Observability: Getting Started Quickly [Linux Journal - The Original Magazine of the Linux Community]

Linux BPF For Observability: Getting Started Quickly

How and Why for BPF

BPF is a powerful component in the Linux kernel and the tools that make use of it are vastly varied and numerous. In this article we examine the general usefulness of BPF and guide you on a path towards taking advantage of BPF’s utility and power. One aspect of BPF, like many technologies, is that at first blush it can appear overwhelming. We seek to remove that feeling and to get you started.

What is BPF?

BPF is the name, and no longer an acronym, but it was originally Berkeley Packet Filter and then eBPF for Extended BPF, and now just BPF. BPF is a kernel and user-space observability scheme for Linux.

One way to describe it: BPF is a verified-to-be-safe, fast-to-switch-to mechanism for running code in Linux kernel space to react to events such as function calls, function returns, and tracepoints in kernel or user space.

To use BPF, one runs a program that is translated to instructions that will be run in kernel space. Those instructions may be interpreted or translated to native instructions. For most users the exact nature doesn’t matter.

While in the kernel, the BPF code can perform actions for events, like creating stack traces, counting the events, or collecting counts into buckets for histograms.

Through this, BPF programs provide fast, immensely powerful, and flexible means for deep observability of what is going on in the Linux kernel or in user space. Observability into user space from kernel space is possible, of course, because the kernel can control and observe code executing in user mode.

Running BPF programs amounts to having a user program make BPF system calls which are checked for appropriate privileges and verified to execute within limits. For example, in the Linux kernel version 5.4.44, the BPF system call checks for privilege with:

if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
        return -EPERM;

The BPF system call checks for a sysctl controlled value and for a capability. The sysctl variable can be set to one with the command

sysctl kernel.unprivileged_bpf_disabled=1

but to set it to zero you must reboot, making sure your system is not configured to set it to one at boot time.
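You can read the current value back at any time with:

sysctl kernel.unprivileged_bpf_disabled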

Because BPF does the work in kernel space, significant time and overhead are saved by avoiding context switches and by not having to transfer large amounts of data back to user space.

Not all kernel functions can be traced. For example, if you were to try funccount-bpfcc '*_copy_to_user' you may get output like:

cannot attach kprobe, Invalid argument
Failed to attach BPF program b'trace_count_3' to kprobe b'_copy_to_user'

This is kind of mysterious. If you check the output from dmesg you would see something like:

A Linux Survey For Beginners [Linux Journal - The Original Magazine of the Linux Community]

Linux For Beginners

So you have decided to give the Linux operating system a try. You have heard it is a good stable operating system with lots of free software and you are ready to give it a shot. It is downloadable for free, so you get on the net and search for a copy, and you are in for a shock. Because there isn’t one “Linux”, there are many. Now you feel like a deer in the headlights. You want to make a wise choice, but have no idea where to start. Unfortunately, this is where a lot of new Linux users give up. It is just too confusing.

The many versions of Linux are often referred to as “flavors” or distributions. Imagine yourself in an ice cream shop displaying 30+ flavors. They all look delicious, but it’s hard to pick one and try it. You may find yourself confused by the many choices but you can be sure you will leave with something delicious. Picking a Linux flavor should be viewed in the same way.

As with ice cream lovers, Linux users have their favorites, so you will hear people profess which is the “best”. Of course, the best is the one that you conclude, will fit your needs. That might not be the first one you try. According to linuxquestions.org there are currently 481 distributions, but you don’t need to consider every one. The same source lists these distributions as “popular”: Ubuntu, Fedora, Linux Mint, OpenSUSE, PCLinuxOS, Debian, Mageia, Slackware, CentOS, Puppy, Arch. Personally I have only tried about five of these and I have been a Linux user for more than 20 years. Today, I mostly use Fedora.

Many of these also have derivatives that are made for special purpose uses. For example, Fedora lists special releases for Astronomy, Comp Neuro, Design Suite, Games, Jam, Python Classroom, Security Lab, Robotics Suite. All of these are still Fedora, but the installation includes a large quantity of programs for the specific purpose. Often a particular set of uses can spawn a whole new distribution with a new name. If you have a special interest, you can still install the general one (Workstation) and update later.

Very likely one of these systems will suit you. Even within these there are subtypes and “windows treatments” to customize your operating system. Gnome, Xfce, LXDE, and so on are different windows treatments available in all of the Linux flavors. Some try to look like MS Windows, some try to look like a Mac. Some try to be original, lightweight, graphically awesome. But that is best left for another article. You are running Linux no matter which of those you choose. If you don’t like the one you choose, you can try another without losing anything. You also need to know that some of these distributions are related, so that can help simplify your choice.

 

Terminal Vitality [Linux Journal - The Original Magazine of the Linux Community]

Terminal Vitality - Difference Engine

Ever since Douglas Engelbart flipped over a trackball and discovered a mouse, our interactions with computers have shifted from linguistics to hieroglyphics. That is, instead of typing commands at a prompt in what we now call a Command Line Interface (CLI), we click little icons and drag them to other little icons to guide our machines to perform the tasks we desire. 

Apple led the way to commercialization of this concept we now call the Graphical User Interface (GUI), replacing its pioneering and mostly keyboard-driven Apple // microcomputer with the original GUI-only Macintosh. After quickly responding with an almost unusable Windows 1.0 release, Microsoft piled on in later versions with the Start menu and push button toolbars that together solidified mouse-driven operating systems as the default interface for the rest of us. Linux, along with its inspiration Unix, had long championed many users running many programs simultaneously through an insanely powerful CLI. It thus joined the GUI party late with its likewise insanely powerful yet famously insecure X-Windows framework and the many GUIs such as KDE and Gnome that it eventually supported.

GUI Linux

But for many years the primary role for X-Windows on Linux was gratifyingly appropriate given its name - to manage a swarm of xterm windows, each running a CLI. It's not that Linux is in any way incompatible with the Windows / Icon / Mouse / Pointer style of program interaction - the acronym this time being left as an exercise for the discerning reader. It's that we like to get things done. And in many fields where the progeny of Charles Babbage's original Analytic Engine are useful, directing the tasks we desire is often much faster through linguistics than by clicking and dragging icons.

 

A tiling window manager makes xterm overload more manageable

 

A GUI certainly made organizing many terminal sessions more visual on Linux, although not necessarily more practical. During one stint of my lengthy engineering career, I was building much software using dozens of computers across a network, and discovered the charms and challenges of managing them all through Gnu's screen tool. Not only could a single terminal or xterm contain many command line sessions from many computers across the network, but I could also disconnect from them all as they went about their work, drive home, and reconnect to see how the work was progressing. This was quite remarkable in the early 1990s, when Windows 2 and Mac OS 6 ruled the world. It's rather remarkable even today.
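
For readers who have not used it, a minimal sketch of that detach/reattach workflow (the session name is illustrative):

# start a named screen session and run long jobs inside it
screen -S builds

# detach with Ctrl-a d, log out, drive home ...

# reattach later from any terminal (or over ssh) to pick up where you left off
screen -r builds

# list running sessions if you forget the name
screen -ls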

Bashing GUIs

Building A Dashcam With The Raspberry Pi Zero W [Linux Journal - The Original Magazine of the Linux Community]


I've been playing around with the Raspberry Pi Zero W lately and having so much fun on the command line. For those uninitiated, it's a tiny ARM computer running Raspbian, a derivative of Debian. It has a 1 GHz processor that can be overclocked and 512 MB of RAM, in addition to wireless-g and Bluetooth.

Raspberry Pi Zero W with wireless-g and Bluetooth

A few weeks ago I built a garage door opener with video, accessible via the net. I wanted to do something a bit different this time and settled on a dashcam for my brother-in-law's SUV.

I wanted the camera and Pi Zero W mounted on the dashboard and easy to remove. On boot it should autostart the RamDashCam (RDC), and there should also be four desktop scripts: dashcam.sh, startdashcam.sh, stopdashcam.sh and shutdown.sh. I also created a folder named video on the Desktop for the older video files. In addition, I needed a way to power the RDC when there is no power to the vehicle's USB ports, and lastly I wanted its data accessible on the local LAN when the vehicle is at home.

Here is the parts list:

  1. Raspberry Pi Zero W kit (I got mine from Vilros.com)
  2. Raspberry Pi official camera
  3. Micro SD card, at least 32 GB
  4. A 3D-printed case from thingiverse.com
  5. Portable charger, of the kind usually used to charge cell phones and tablets on the go
  6. Command strips (double-sided tape that is easy to remove) or Velcro strips

 

First I flashed the SD card with Raspbian, powered it up and followed the setup menu. I also set a static IP address.

Now to the fun stuff. Let's create a service so we can start and stop the RDC via systemd. Using your favorite editor, create "/etc/systemd/system/dashcam.service" and add the following:

[Unit]
Description=dashcam service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=forking
Restart=on-failure
RestartSec=1
User=pi
WorkingDirectory=/home/pi/Desktop
# runs the recording script itself; the startdashcam.sh desktop helper below just calls systemctl
ExecStart=/bin/bash /home/pi/Desktop/dashcam.sh

[Install]
WantedBy=multi-user.target

 

Now that that's complete, let's enable the service by running the following: sudo systemctl enable dashcam

I added these scripts to start and stop the RDC on the Desktop so my brother-in-law doesn't have to mess around in the menus or on the command line. Remember to make all four scripts executable with "chmod +x".
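
For reference, one command covers all four (paths as used above; the shutdown script name follows the list earlier in this post):

cd /home/pi/Desktop
chmod +x dashcam.sh startdashcam.sh stopdashcam.sh shutdown.sh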

 

startdashcam.sh

#!/bin/bash

# remove files older than 3 days
find /home/pi/Desktop/video -type f -iname '*.flv' -mtime +3 -exec rm {} \;

# start dashcam service
sudo systemctl start dashcam

 

stopdashcam.sh
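
The body of this script is missing from this digest; a minimal sketch, assuming it simply mirrors startdashcam.sh by stopping the service:

#!/bin/bash

# stop dashcam service
sudo systemctl stop dashcam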

SeaGL - Seattle GNU/Linux Conference Happening This Weekend! [Linux Journal - The Original Magazine of the Linux Community]

SeaGL - Seattle GNU/Linux Conference

This Friday, November 13th, and Saturday, November 14th, from 9am to 4pm PST, the 8th annual SeaGL will be held virtually. This year features four keynotes and a mix of talks on FOSS tech, community and history. SeaGL is absolutely free to attend and is being run with free software!

Additionally, we are hosting a pre-event career expo on Thursday, November 12th from 1pm to 5pm. Counselors will be available for 30-minute video sessions to provide resume reviews and career guidance.

Mission

The Seattle GNU/Linux conference (SeaGL) is a free, as in freedom and tea, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre/open source software, hardware, and culture.

SeaGL strives to be welcoming, enjoyable, and informative for professional technologists, newcomers, enthusiasts, and all other users of free software, regardless of their background knowledge; providing a space to bridge these experiences and strengthen the free software movement through mentorship, collaboration, and community.

Dates/Times

  • November 13th and 14th
  • Friday and Saturday
  • Main Event: 9am-4:30pm
  • TeaGL: 1-2:45pm, both days
  • Friday Social: 4:30-6pm
  • Saturday Party: 6-10pm
  • Pre-event Career Expo: 1-5pm, Thursday November 12th
  • All times in Pacific Timezone

Hashtags

- `#SeaGL2020`

- `#TeaGLtoasts`


Reference Links

Best contact: press@seagl.org

15-12-2020

17:58

Firefox 84 Released, Enables WebRender by Default on Linux [OMG! Ubuntu!]

Firefox 84 has been released. We run down its key changes and new features, including tweaks that improve the browser's performance on Linux systems.

This post, Firefox 84 Released, Enables WebRender by Default on Linux is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.

14-12-2020

10:32

Linux Kernel 5.10 LTS Released, This is What’s New [OMG! Ubuntu!]

The Linux 5.10 kernel release is packed to the hilt with big changes, performance enhancements, and new drivers. Linux 5.10 is a Long Term Support release.

This post, Linux Kernel 5.10 LTS Released, This is What’s New is from OMG! Ubuntu!. Do not reproduce elsewhere without permission.


13-11-2020

17:24

30 April 2019 [GNOMON]

On 1 May 2019 my blog turns 10 years old, and then I am (for now) stopping. It is also high time to bring this blog up to date and to keep myself busy…


Python GUI application: consistent backups with fsarchiver [linux blogs franz ulenaers]

Python GUI application for making consistent backups with fsarchiver

A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"! A snapshot can be an exact copy of a logical volume frozen at a certain moment: this makes it possible to make consistent backups of logical volumes while they are in use!





My physical and logical volumes were created as follows:

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell
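
A minimal sketch of the snapshot idea behind the application, using the mydell volume group above (snapshot name and sizes are illustrative):

# create a 5G copy-on-write snapshot of the home volume while it stays in use
lvcreate -s -L 5G -n home_snap /dev/mydell/home

# archive the frozen snapshot with fsarchiver
fsarchiver savefs /data/backups/home.fsa /dev/mydell/home_snap

# remove the snapshot once the archive is written
lvremove -y /dev/mydell/home_snap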







Opening screen (screenshot)

LVM logical volumes [linux blogs franz ulenaers]

LVM = Logical Volume Manager

A partition of type "Linux LVM" can be used for logical volumes, but also for a "snapshot"! A snapshot can be an exact copy of a logical volume frozen at a certain moment: this makes it possible to make consistent backups of logical volumes while they are in use!

How to install it?

    sudo apt-get install lvm2



Create a physical volume for a partition

    command = 'pvcreate' partition

      example:

        the partition must be of type "Linux LVM"!

        pvcreate /dev/sda5



create a volume group

    vgcreate vg_storage partition

      example

        vgcreate mijnvg /dev/sda5



add a logical volume to a volume group

    lvcreate -L size_in_M/G -n logical_volume_name volume_group

      example:

        lvcreate -L 30G -n mijnhome mijnvg



activate a volume group

    vgchange -a y volume_group_name

      example:

        vgchange -a y mijnvg



My physical and logical volumes

    physical volume

      pvcreate /dev/sda1

    volume group

      vgcreate mydell /dev/sda1

    logical volumes

      lvcreate -L 1G -n boot mydell

      lvcreate -L 100G -n data mydell

      lvcreate -L 50G -n home mydell

      lvcreate -L 50G -n root mydell

      lvcreate -L 1G -n swap mydell



Growing/shrinking a logical volume

    grow my home logical volume by 1 G:

      lvextend -L +1G /dev/mapper/mydell-home

    watch out: shrinking a logical volume can lead to data loss if there is not enough space left...! (see the sketch below)

      lvreduce -L -1G /dev/mapper/mydell-home
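
For an ext4 volume the filesystem must be shrunk before the volume itself. A hedged sketch of a safer shrink sequence (sizes illustrative; the volume is unmounted first):

umount /dev/mapper/mydell-home

# check the filesystem, then shrink it to fit inside the smaller volume
e2fsck -f /dev/mapper/mydell-home
resize2fs /dev/mapper/mydell-home 92G

# now the volume itself can safely be reduced by 1G
lvreduce -L -1G /dev/mapper/mydell-home

# grow the filesystem back to fill the volume exactly
resize2fs /dev/mapper/mydell-home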



show physical volumes

    sudo pvs

    shown: PV physical volume, VG volume group, Fmt format (normally lvm2), Attr attributes, PSize size of the PV, PFree free space

      PV        VG     Fmt  Attr PSize   PFree
      /dev/sda6 mydell lvm2 a--  920,68g 500,63g

sudo pvs -a

sudo pvs /dev/sda6



Backing up logical volume settings

    see the included script LVM_bkup



show volume groups

    sudo vgs

      VG     #PV #LV #SN Attr   VSize   VFree
      mydell   1   6   0 wz--n- 920,68g 500,63g



show logical volume(s)

    sudo lvs

      LV       VG     Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
      boot     mydell -wi-ao---- 952,00m
      data     mydell -wi-ao---- 100,00g
      home     mydell -wi-ao----  93,13g
      mintroot mydell -wi-a----- 101,00g
      root     mydell -wi-ao----  94,06g
      swap     mydell -wi-ao----  30,93g



how to remove a logical volume?

    a logical volume can only be removed when its volume group is not active

      that can be done with the vgchange command:

        vgchange -a n mydell

    lvremove /dev/volume_group_name/logical_volume_name

      example:

lvremove /dev/mydell/data





how to remove a physical volume?

vgreduce mydell /dev/sda1




Attachments: LVM_bkup (0.8 KB)




How to mount and umount a USB stick without being root and with your own rwx permissions! [linux blogs franz ulenaers]

Mounting a stick without root

How do you mount and umount a USB stick without being root and with rwx permissions?
(rename every ulefr01 to your own username!)

label the stick

  • use the 'fatlabel' command to assign a volume name or label if you use a vfat filesystem on your USB stick

  • use the 'tune2fs' command for ext2, ext3 or ext4

    • to give your USB stick the volume name stick32GB, use the command:

sudo tune2fs -L stick32GB /dev/sdc1

note: substitute the correct device for /dev/sdc1!


make the filesystem on your stick clean

  • after mounting you may see dmesg messages such as: Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

    • use the filesystem consistency check command fsck to fix this

      • umount before you run the fsck command! (use the correct device!)

        • fsck /dev/sdc1

note: substitute your device for /dev/sdc1!


set permissions on the folders and files of your stick

  • insert your stick into a USB port and umount it

sudo chown ulefr01:ulefr01 /media/ulefr01/ -R
  • set an acl on your ext2/3/4 stick (does not work on vfat!)

setfacl -m u:ulefr01:rwx /media/ulefr01
  • with getfacl you can see the acl

getfacl /media/ulefr01
  • with the ls command you can see the result

ls /media/ulefr01 -dla

drwxrwx--- 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01

note: if a '+' is present, an acl is already in place, as on the following line:

drwxrwx---+ 5 ulefr01 ulefr01 4096 okt 1 18:40 /media/ulefr01


Mount the stick

  • insert your stick into a USB port and check whether it is mounted automatically

  • check the permissions of the existing files and folders on your stick

ls * -la

  • if root or other ownership is already present, reset it with the following command

sudo chown ulefr01:ulefr01 /media/ulefr01/stick32GB -R

Create a folder for each stick

  • cd /media/ulefr01

  • mkdir mmcblk16G stick32GB stick16gb


adapt /etc/fstab

  • add a line for each stick

    • examples

LABEL=mmcblk16G /media/ulefr01/mmcblk16G ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick32GB /media/ulefr01/stick32GB ext4 user,exec,defaults,noatime,acl,noauto 0 0
LABEL=stick16gb /media/ulefr01/stick16gb vfat user,defaults,noauto 0 0
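
With the user option in these lines, mounting no longer requires root. A minimal usage sketch (mount points as defined above):

mount /media/ulefr01/stick32GB
# ... work with the stick ...
umount /media/ulefr01/stick32GB

Because /etc/fstab lists the mount point with the user option, the same unprivileged user who mounted the stick can also umount it.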


Check the following

  • the following should now be possible:

    • mount and umount without being root

    • note: you cannot umount if the mount was done by root! If that is the case, first umount as root; then mount as the user, after which you can also umount.

    • put a new file on your stick without being root

    • create a new folder on your stick without being root

  • check that you can create new files without being root

        • touch test

        • ls test -la

        • rm test


Procedures MyCloud [linux blogs franz ulenaers]

Procedures MyCloud

  • The procedure lftpUlefr01Cloudupload is used to upload files and folders to MyCloud

  • The procedure lftpUlefr01Cloudmirror is used to fetch changes back

Both procedures use the lftp program (a "sophisticated file transfer program") and are used to keep laptop and desktop synchronized.

The procedures were adapted so that hidden files and hidden folders are also processed; in addition, for the mirror, certain files and folders that rarely change were filtered out (--exclude) so that they are not processed again.

They remain on the Cloud as a backup, but not on the various laptops (this was done for older mails of 2016, months 2016-11 and 2016-12, and for all earlier months of 2017 up to and including September!).

  • see attachments
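
The attachments themselves are not included in this digest; a hedged sketch of what such lftp invocations typically look like (host, credentials and paths are illustrative):

# upload: mirror a local folder to the cloud, hidden files included
lftp -u user,password cloud.example.com -e "mirror -R /home/user/Mail /backup/Mail; quit"

# mirror back: fetch changes, filtering out rarely-changing folders
lftp -u user,password cloud.example.com -e "mirror --exclude 2016-11/ --exclude 2016-12/ /backup/Mail /home/user/Mail; quit"

With -R, lftp's mirror uploads the local tree to the remote side; without it, it downloads. --exclude takes a regular expression matched against paths.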


Setting an acl list [linux blogs franz ulenaers]

setfacl

note: usually possible on Linux filesystems: btrfs, ext2, ext3, ext4 and ReiserFS!

  • How to set an acl for one user?

setfacl -m u:ulefr01:rwx /home/ulefr01

note: use your own username instead of ulefr01

  • How to remove an acl?

setfacl -x u:ulefr01 /home/ulefr01
  • How to set acls for two or more users?

setfacl -m u:ulefr01:rwx /home/ulefr01

setfacl -m u:myriam:r-x /home/ulefr01

note: substitute your second username for myriam; here myriam has no w (write) access, but does have r (read) and x (execute)!

  • How to list the configured acls?

getfacl home/ulefr01
getfacl: Removing leading '/' from absolute path names
# file: home/ulefr01
# owner: ulefr01
# group: ulefr01
user::rwx
user:ulefr01:rwx
user:myriam:r-x 
group::---
mask::rwx
other::---
  • How to check the result?

getfacl home/ulefr01
 see above
ls /home/ulefr01 -dla
drwxrwx---+  ulefr01 ulefr01 4096 okt 1 18:40  /home/ulefr01

note the + sign!


Python GUI application: tune2fs [linux blogs franz ulenaers]

Python GUI application for the tune2fs command

Created Wednesday 18 October 2017

written in the Python programming language using GTK+ 3

start it in a terminal with: sudo python mytune2fs.py

or compile the Python source and start the compiled version

see attachments:
* pdf
* mytune2fs.py

Python GUI application myarchive.py [linux blogs franz ulenaers]

Python GUI application for making backups with fsarchiver

Created Friday 13 October 2017

GUI application for making backups, showing archive info and restoring with fsarchiver

see the included file: python_GUI_applicatie_backups_maken_met_fsarchiver.pdf

start it in terminal mode with:

* sudo python myarchive.py

* sudo python myarchive2.py

or make a compiled version and start the generated objects


python myfsck.py [linux blogs franz ulenaers]

Python GUI application for the fsck command

Created Friday 13 October 2017

see the included file myfsck.py

This application can mount and umount devices, but is mainly intended to run the fsck command

Root privileges are required!

help?

* start it in terminal mode

* sudo python myfsck.py


The best (most performant) filesystem on a USB stick: how do you set it up? [linux blogs franz ulenaers]

the best filesystem on a USB stick: how do you set it up?

the best (most performant) filesystem is ext4

  • how to set it up?

mkfs.ext4 $device
  • first disable the journal

tune2fs -O ^has_journal $device
  • do journaling only with data_writeback

tune2fs -o journal_data_writeback $device
  • do not use reserved space; set it to zero

tune2fs -m 0 $device


  • the included bash script can be used for the three tune2fs actions above:



file USBperf

# USBperfext4

echo 'USBperf'
echo '--------'
echo 'ext4 device ?'
read device
echo "device= $device"
echo 'ok ?'
read ok

# quote the variable so an empty answer does not break the test
if [ -z "$ok" ] || [ "$ok" = 'n' ] || [ "$ok" = 'N' ]
then
   echo 'not ok - stopping'
   exit 1
fi

echo "disable journaling: tune2fs -O ^has_journal $device"
tune2fs -O ^has_journal $device

echo "use writeback data mode for the filesystem: tune2fs -o journal_data_writeback $device"
tune2fs -o journal_data_writeback $device

echo "disable reserved space: tune2fs -m 0 $device"
tune2fs -m 0 $device

echo 'done!'
read ok
echo "device= $device"
exit 0


  • adapt the /etc/fstab file for your USB stick

    • use the 'noatime' option

Making a file impossible to modify, rename or delete in Linux! [linux blogs franz ulenaers]

Making a file impossible to modify, rename or delete in Linux!


file .encfs6.xml


how: sudo chattr +i /data/Encrypt/.encfs6.xml

you cannot modify the file, you cannot rename it, and you cannot delete it, even if you are root

  • set the attribute
  • view the status
    • lsattr .encfs6.xml
      • ----i--------e-- .encfs6.xml
        • the i means immutable
  • to remove the immutable attribute
    • chattr -i .encfs6.xml
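
A quick demonstration of the effect (file name from above; run as root):

chattr +i .encfs6.xml
rm .encfs6.xml
rm: cannot remove '.encfs6.xml': Operation not permitted
chattr -i .encfs6.xml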



Backup laptop [linux blogs franz ulenaers]

the laptop has a multiboot setup: Windows 7 with encryption, and Linux Mint
for the backup of my laptop, see http://users.telenet.be/franz.ulenaers/laptopca-new.html

Encryption [linux blogs franz ulenaers]

With encryption you can secure the data on your computer by making it unreadable to the outside world!

How can you encrypt a filesystem?

install the following open source packages:

    loop-aes-utils and cryptsetup

            apt-get install loop-aes-utils

            apt-get install cryptsetup

        modprobe cryptoloop
        add the following modules to your /etc/modules:
            aes
            dm_mod
            dm_crypt
            cryptoloop

How to create a secured filesystem?

  1. dd if=/dev/zero of=/home/cryptfile bs=1M count=650
this creates a file 650 MB in size
  2. losetup -e aes /dev/loop0 /home/cryptfile
you will then be asked for a password of at least 20 characters
  3. mkfs.ext3 /dev/loop0
makes an ext3 filesystem with journaling
  4. mkdir /mnt/crypt
creates an empty directory
  5. mount /dev/loop0 /mnt/crypt -t ext3
now you have a filesystem available under /mnt/crypt

....

You can make your filesystem available automatically with the following entry in your /etc/fstab:

/home/cryptfile /mnt/crypt ext3 auto,encryption=aes,user,exec 0 0

....

You can turn the encryption off with:

umount /mnt/crypt


losetup -d /dev/loop0        (this is no longer needed if you have the following entry in your /etc/fstab:
                /home/cryptfile /mnt/crypt ext3 auto,encryption=aes,exec 0 0)
....
You can mount manually with:
  • losetup -e aes /dev/loop0 /home/cryptfile
 you will be asked to enter a password of at least 20 characters
if the password is wrong, you get the following message:
        mount: wrong fs type, bad option, bad superblock on /dev/loop0,
        or too many mounted file systems
        ..
  • mount /dev/loop0 /mnt/crypt -t ext3
this mounts the filesystem


Links in Linux [linux blogs franz ulenaers]

In Linux you can give files multiple names, so you can store a file in several places in the file tree without taking up extra space on the hard disk (more or less).

There are two kinds of links:

  1. hard links

  2. symbolic links

A hard link uses the same file number (inode).

A hard link does not work for a directory!

A hard link must be on the same filesystem, and the original file must exist!

With a symbolic link the file gets a new file number; the file pointed to does not have to exist.

A symbolic link also works for a directory.

bash shell, user ulefr01

pwd
/home/ulefr01/cgcles/linux
ls linuxcursus.odt -ila
293800 -rw-r--r-- 1 ulefr01 ulefr01 4251348 2005-12-17 21:11 linuxcursus.odt

The file linuxcursus.odt is 4.2M in size, inode number 293800.

bash shell, user tom

pwd
/home/tom
ln /home/ulefr01/cgcles/linux/linuxcursus.odt cursuslinux.odt
tom@franz3:~ $ ls cursuslinux.odt -il
293800 -rw-r--r-- 2 ulefr01 ulefr01 4251348 2005-12-17 21:11 cursuslinux.odt
no extra 4.2M used; same inode number 293800!

bash shell, user root

pwd
/root
root@franz3:~ # ln /home/ulefr01/cgcles/linux/linuxcursus.odt linuxcursus.odt
root@franz3:~ # ls -il linux*
293800 -rw-rw-r-- 3 ulefr01 ulefr01 4251300 2005-12-17 21:31 linuxcursus.odt
no extra 4.2M used; same inode number 293800!

bash shell, user ulefr01, symbolic link

ln -s cgcles/linux/linuxcursus.odt linuxcursus.odt
ulefr01@franz3:~ $ ls -il linuxcursus.odt
1191741 lrwxrwxrwx 1 ulefr01 ulefr01 28 2005-12-17 21:42 linuxcursus.odt -> cgcles/linux/linuxcursus.odt
only 28 bytes

ln -s linuxcursus.odt test.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
only 15 bytes

rm linuxcursus.odt
ulefr01@franz3:~ $ ls *.odt -il
1193723 -rw-r--r-- 1 ulefr01 ulefr01 27521 2005-11-23 20:11 Backup&restore.odt
1193942 -rw-r--r-- 1 ulefr01 ulefr01 13535 2005-11-26 16:11 doc.odt
1191933 -rw------- 1 ulefr01 ulefr01 6135 2005-12-06 12:00 fru.odt
1193753 -rw-r--r-- 1 ulefr01 ulefr01 19865 2005-11-23 22:44 harddiskdata.odt
1193576 -rw-r--r-- 1 ulefr01 ulefr01 7198 2005-11-26 21:46 ooo-1.odt
1191749 -rw------- 1 ulefr01 ulefr01 22542 2005-12-06 16:16 Regen.odt
1191898 lrwxrwxrwx 1 ulefr01 ulefr01 15 2005-12-17 22:00 test.odt -> linuxcursus.odt
test.odt now points to a file that no longer exists!

04-11-2020

17:45

The Document Foundation releases LibreOffice 7.0.3 [Press Releases – The Document Foundation Blog]

Berlin, October 29, 2020 – LibreOffice 7.0.3, the third minor release of the LibreOffice 7.0 family, targeted at technology enthusiasts and power users, is now available for download from https://www.libreoffice.org/download/, ahead of the planned schedule. LibreOffice 7.0.3 includes over 90 bug fixes, including fixes for Calc issues introduced with 7.0.2, and improvements to document compatibility.

LibreOffice offers the highest level of compatibility in the office suite arena, starting from native support for the OpenDocument Format (ODF) – with better security and interoperability features – to wide support for proprietary formats.

LibreOffice 7.0.3 represents the bleeding edge in terms of features for open source office suites. Users wanting the robustness of a more mature version optimized for enterprise class deployments can still download LibreOffice 6.4.7.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.

Support for migrations and training should be sourced from certified professionals who provide value-added services which extend the reach of the community to the corporate world. Also, the work done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice individual users are supported by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

Availability of LibreOffice

LibreOffice 7.0.3 and 6.4.7 are immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. LibreOffice Online source code is available as Docker image: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 7.0.3’s change log page is available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.0.3/RC1 (changed in RC1).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

The Document Foundation announces LibreOffice 6.4.7 [Press Releases – The Document Foundation Blog]

Berlin, October 22, 2020 – The Document Foundation announces the availability of LibreOffice 6.4.7, the 7th and last minor release of the LibreOffice 6.4 family, targeted at users relying on the application for desktop productivity. LibreOffice 6.4.7 includes bug fixes and improvements to document compatibility and interoperability with software from other vendors.

Enterprises are strongly recommended to source LibreOffice from an ecosystem partner, to get long-term supported (LTS) releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.

Development done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice individual users are supported by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

Availability of LibreOffice 6.4.7

LibreOffice 6.4.7 is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. TDF builds of the latest LibreOffice Online source code are available as Docker images: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 6.4.7’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/6.4.7/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/6.4.7/RC2 (changed in RC2).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Support LibreOffice

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

Announcement of LibreOffice 7.0.2 [Press Releases – The Document Foundation Blog]

Berlin, October 8, 2020 – LibreOffice 7.0.2, the second minor release of the LibreOffice 7.0 family, targeted at technology enthusiasts and power users, is now available for download from https://www.libreoffice.org/download/. LibreOffice 7.0.2 includes over 130 bug fixes and improvements to document compatibility.

The most significant new features of the LibreOffice 7.0 family are: support for OpenDocument Format (ODF) 1.3; Skia graphics engine and Vulkan GPU-based acceleration for better performance; and carefully improved compatibility with DOCX, XLSX and PPTX files.

LibreOffice offers the highest level of compatibility in the office suite arena, starting from native support for the OpenDocument Format (ODF) – with better security and interoperability features – to wide support for proprietary formats.

LibreOffice 7.0.2 represents the bleeding edge in terms of features for open source office suites. Users wanting the robustness of a more mature version optimized for enterprise class deployments can still download LibreOffice 6.4.6.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.

Support for migrations and training should be sourced from certified professionals who provide value-added services which extend the reach of the community to the corporate world. Also, the work done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice – thanks to its mature codebase, rich feature set, support for open standards, excellent compatibility and long-term support options – represents the ideal solution for businesses that want to regain or keep control of their data and free themselves from vendor lock-in.

LibreOffice individual users are supported by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

Availability of LibreOffice

LibreOffice 7.0.2 and 6.4.6 are immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. LibreOffice Online source code is available as Docker image: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 7.0.2’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.0.2/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.0.2/RC2 (changed in RC2).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

LibreOffice 7.0.1 available for download [Press Releases – The Document Foundation Blog]

Berlin, September 3, 2020 – LibreOffice 7.0.1, the first minor release of the LibreOffice 7.0 family, targeted at technology enthusiasts and power users, is now available for download from https://www.libreoffice.org/download/. LibreOffice 7.0.1 includes around 80 bug fixes and improvements to document compatibility.

The most significant new features of the LibreOffice 7.0 family are: support for OpenDocument Format (ODF) 1.3; Skia graphics engine and Vulkan GPU-based acceleration for better performance; and carefully improved compatibility with DOCX, XLSX and PPTX files.

LibreOffice offers the highest level of compatibility in the office suite arena, starting from native support for the OpenDocument Format (ODF) – with better security and interoperability features – to wide support for proprietary formats.

LibreOffice 7.0.1 represents the bleeding edge in terms of features for open source office suites. Users wanting the robustness of a more mature version optimized for enterprise class deployments can still download LibreOffice 6.4.6.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/.

Support for migrations and training should be sourced from certified professionals who provide value-added services which extend the reach of the community to the corporate world. Also, the work done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice – thanks to its mature codebase, rich feature set, support for open standards, excellent compatibility and long-term support options – represents the ideal solution for businesses that want to regain or keep control of their data and free themselves from vendor lock-in.

LibreOffice individual users are supported by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

Availability of LibreOffice

LibreOffice 7.0.1 and 6.4.6 are immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. LibreOffice Online source code is available as Docker image: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 7.0.1’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/7.0.1/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/7.0.1/RC2 (changed in RC2).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Announcement of LibreOffice 6.4.6 [Press Releases – The Document Foundation Blog]

Berlin, August 13, 2020 – The Document Foundation announces the availability of LibreOffice 6.4.6, the 6th minor release of the LibreOffice 6.4 family, targeted at all users relying on the best free office suite ever for desktop productivity. LibreOffice 6.4.6 includes bug fixes and improvements to document compatibility and interoperability with software from other vendors.

LibreOffice 6.4.6 is optimized for use in every environment, even by more conservative users, as it now includes several months of work on bug fixes. Users of LibreOffice 6.3.6 and previous versions should update to LibreOffice 6.4.6, as it is now the best choice in terms of robustness for their productivity needs.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/. Also, the work done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice individual users are supported by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

Availability of LibreOffice 6.4.6

LibreOffice 6.4.6 is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. TDF builds of the latest LibreOffice Online source code are available as Docker images: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 6.4.6’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/6.4.6/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/6.4.6/RC2 (changed in RC2).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Support LibreOffice

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

Announcement of LibreOffice 7.0 [Press Releases – The Document Foundation Blog]

LibreOffice 7.0: the new major release of the best FOSS office suite ever is available on all OSes and platforms, and provides significant new features

Berlin, August 5, 2020 – The LibreOffice Project announces the availability of LibreOffice 7.0, a new major release providing significant new features: support for OpenDocument Format (ODF) 1.3; Skia graphics engine and Vulkan GPU-based acceleration for better performance; and carefully improved compatibility with DOCX, XLSX and PPTX files.

  • Support for ODF 1.3. OpenDocument, LibreOffice’s native open and standardised format for office documents, has recently been updated to version 1.3 as an OASIS Technical Committee Specification. The most important new features are digital signatures for documents and OpenPGP-based encryption of XML documents, with improvements in areas such as change tracking, and additional details in the description of elements in first pages, text, numbers and charts. The development of ODF 1.3 features has been funded by donations to The Document Foundation.
  • Skia graphics engine and Vulkan GPU-based acceleration. The Skia graphics engine has been implemented thanks to sponsorship by AMD, and is now the default on Windows, for faster performance. Skia is an open source 2D graphics library which provides common APIs that work across a variety of hardware and software platforms, and can be used for drawing text, shapes and images. Vulkan is a new-generation graphics and compute API with high-efficiency and cross-platform access to modern GPUs.
  • Better compatibility with DOCX, XLSX and PPTX files. DOCX now saves in native 2013/2016/2019 mode, instead of 2007 compatibility mode, to improve interoperability with multiple versions of MS Office, based on the same Microsoft approach. Export to XLSX files with sheet names longer than 31 characters is now possible, along with exporting checkboxes in XLSX. The “invalid content error” message was resolved when opening exported XLSX files with shapes. Finally, there were improvements to the PPTX import/export filter.
    LibreOffice offers the highest level of compatibility in the office suite arena, starting from native support for the OpenDocument Format (ODF) – with better security and interoperability features over proprietary formats – to almost perfect support for DOCX, XLSX and PPTX files. In addition, LibreOffice includes filters for many legacy document formats, and as such is the best interoperability tool in the market.

Summary of Other New Features [1]

GENERAL

  • New icon theme, the default on macOS: Sukapura
  • New shapes galleries: arrows, diagrams, icons and more…
  • Glow and soft edge effects for objects

WRITER

  • Navigator is easier to use, with more context menus
  • Semi-transparent text is now supported
  • Bookmarks can now be displayed in-line in text
  • Padded numbering in lists, for consistency
  • Better handling of quotation marks and apostrophes

CALC

  • New functions for non-volatile random number generation
  • Keyboard shortcut added for autosum

IMPRESS & DRAW

  • Semi-transparent text is supported here too
  • Subscripts now return to the default of 8%
  • PDFs larger than 500 cm can now be generated

LibreOffice Technology

LibreOffice 7.0’s new features have been developed by a large community of code contributors: 74% of commits are from developers employed by companies sitting in the Advisory Board, such as Collabora, Red Hat and CIB, plus several other organizations, and 26% are from individual volunteers.

In addition, there is a global community of individual volunteers taking care of other fundamental activities, such as quality assurance, software localization, user interface design and user experience, editing of help content and documentation, along with free software and open document standards advocacy.

A video summarizing the top new features in LibreOffice 7.0 is available on YouTube: https://www.youtube.com/watch?v=XusjjbBm81s and also on PeerTube: https://tdf.io/lo70peertube

Products based on LibreOffice Technology are available for all major desktop operating systems (Windows, macOS, Linux and ChromeOS), for the cloud and for mobile platforms. They are released by The Document Foundation, and by ecosystem companies contributing to software development.

LibreOffice for End Users

LibreOffice 7.0 represents the bleeding edge in terms of features for open source office suites, and as such is targeted at technology enthusiasts, early adopters and power users. The Document Foundation does not provide any technical support for users, although they can get help from other users on mailing lists and the Ask LibreOffice website: https://ask.libreoffice.org

For users whose main objective is personal productivity and therefore prefer a release that has undergone more testing and bug fixing over the new features, The Document Foundation maintains the LibreOffice 6.4 family, which includes some months of back-ported fixes. The current version is LibreOffice 6.4.5.

LibreOffice in Business

For enterprise-class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLA (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/. The work done by ecosystem partners is an integral part of LibreOffice Technology.

For migrations from proprietary office suites and training, professional support should be sourced from certified professionals who provide value-added services which extend the reach of the community to the corporate world, and offer CIOs and IT managers a solution in line with proprietary offerings. Reference page: https://www.libreoffice.org/get-help/professional-support/.

In fact, LibreOffice – thanks to its mature codebase, rich feature set, strong support for open standards, excellent compatibility and long-term support options from certified partners – represents the ideal solution for businesses that want to regain control of their data and free themselves from vendor lock-in.

Availability of LibreOffice 7.0

LibreOffice 7.0 is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements for proprietary operating systems are Microsoft Windows 7 SP1 and Apple macOS 10.12. Builds of the latest LibreOffice Online source code are available as Docker images from TDF: https://hub.docker.com/r/libreoffice/online/

LibreOffice Technology based products for Android and iOS are listed here: https://www.libreoffice.org/download/android-and-ios/, while for App Stores and ChromeOS are listed here: https://www.libreoffice.org/download/libreoffice-from-microsoft-and-mac-app-stores/

LibreOffice users, free software advocates and community members can support The Document Foundation with a donation at https://www.libreoffice.org/donate

LibreOffice 7.0 is built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org

[1] A more comprehensive list of LibreOffice 7.0 new features is available on the Release Notes wiki page: https://wiki.documentfoundation.org/ReleaseNotes/7.0

Press Kit

The press kit with press release and high-resolution images and screenshots, is available here: https://tdf.io/lo70presskit

Announcement of LibreOffice 6.4.5 [Press Releases – The Document Foundation Blog]

Berlin, July 2, 2020 – The Document Foundation announces the availability of LibreOffice 6.4.5, the 5th minor release of the LibreOffice 6.4 family, targeted at technology enthusiasts and power users. LibreOffice 6.4.5 includes over 100 bug fixes and improvements to document compatibility and interoperability with software from other vendors.

LibreOffice 6.4.5 is optimized for use in production environments, even by more conservative users, as it now includes several months of work on bug fixes. Users of LibreOffice 6.3.6 and previous versions should start planning the update to LibreOffice 6.4.5, as the new major LibreOffice release – tagged 7.0 – is going to be announced in early August.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/. Also, the work done by ecosystem partners flows back into the LibreOffice project, and this represents an advantage for everyone.

LibreOffice individual users are supported by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

Availability of LibreOffice 6.4.5

LibreOffice 6.4.5 is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. TDF builds of the latest LibreOffice Online source code are available as Docker images: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 6.4.5’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/6.4.5/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/6.4.5/RC2 (changed in RC2).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Support LibreOffice

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 6.4.4 available for download [Press Releases – The Document Foundation Blog]

Berlin, May 21, 2020 – The Document Foundation announces the availability of LibreOffice 6.4.4, the 4th minor release of the LibreOffice 6.4 family, targeted at technology enthusiasts and power users. LibreOffice 6.4.4 includes many bug fixes and improvements to document compatibility.

LibreOffice 6.4.4 represents the bleeding edge in terms of features for open source office suites, and as such is not optimized for enterprise-class deployments, where features are less important than robustness. Users wanting a more mature version can download LibreOffice 6.3.6, which includes some months of back-ported fixes.

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners, to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements): https://www.libreoffice.org/download/libreoffice-in-business/. Also, the work done by ecosystem partners flows back into the LibreOffice project, benefiting everyone.

LibreOffice’s individual users are helped by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

Availability of LibreOffice 6.4.4

LibreOffice 6.4.4 is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. TDF builds of the latest LibreOffice Online source code are available as Docker images: https://hub.docker.com/r/libreoffice/online/.

LibreOffice 6.4.4’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/6.4.4/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/6.4.4/RC2 (changed in RC2).

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Support LibreOffice

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

LibreOffice 6.3.6 available for download [Press Releases – The Document Foundation Blog]

Berlin, April 30, 2020 – The Document Foundation announces LibreOffice 6.3.6, the last minor release of the LibreOffice 6.3 family, targeted at organizations and individuals using the software in production environments, who are invited to update their current version. The new release provides bug and regression fixes, and improvements to document compatibility.

LibreOffice 6.3.6’s change log pages are available on TDF’s wiki: https://wiki.documentfoundation.org/Releases/6.3.6/RC1 (changed in RC1) and https://wiki.documentfoundation.org/Releases/6.3.6/RC2 (changed in RC2).

LibreOffice’s individual users are helped by a global community of volunteers: https://www.libreoffice.org/get-help/community-support/. On the website and the wiki there are guides, manuals, tutorials and HowTos. Donations help us to make all of these resources available.

LibreOffice in business

For enterprise class deployments, TDF strongly recommends sourcing LibreOffice from one of the ecosystem partners to get long-term supported releases, dedicated assistance, custom new features and other benefits, including SLAs (Service Level Agreements). Also, the work done by ecosystem partners flows back into the LibreOffice project, benefiting everyone.

Also, support for migrations and training should be sourced from certified professionals who provide value-added services which extend the reach of the community to the corporate world and offer CIOs and IT managers a solution in line with proprietary offerings.

In fact, LibreOffice – thanks to its mature codebase, rich feature set, strong support for open standards, excellent compatibility and long-term support options from certified partners – represents the ideal solution for businesses that want to regain control of their data and free themselves from vendor lock-in.

Availability of LibreOffice 6.3.6

LibreOffice 6.3.6 is immediately available from the following link: https://www.libreoffice.org/download/. Minimum requirements are specified on the download page. TDF builds of the latest LibreOffice Online source code are available as Docker images: https://hub.docker.com/r/libreoffice/online/.

LibreOffice Online is fundamentally a server-based platform, and should be installed and configured by adding cloud storage and an SSL certificate. It might be considered an enabling technology for the cloud services offered by ISPs or the private cloud of enterprises and large organizations.

All versions of LibreOffice are built with document conversion libraries from the Document Liberation Project: https://www.documentliberation.org.

Support LibreOffice

LibreOffice users are invited to join the community at https://ask.libreoffice.org, where they can get and provide user-to-user support. People willing to contribute their time and professional skills to the project can visit the dedicated website at https://whatcanidoforlibreoffice.org.

LibreOffice users, free software advocates and community members can provide financial support to The Document Foundation with a donation via PayPal, credit card or other tools at https://www.libreoffice.org/donate.

20-02-2020

12:26

IDG says goodbye to Webwereld [Laatste Artikelen - Webwereld]

IDG has set out on a strategic change of course, continuing in the Benelux exclusively with the business titles CIO and Computerworld. Moreover, from March 1st this content will move to the global sites computerworld.com and cio.com, where IDG will serve the Benelux region with both Dutch-language and English-language content.

18-02-2020

21:55

Samsung Galaxy Z Flip, S20(+) and S20 Ultra Hands-on [Laatste Artikelen - Webwereld]

Samsung invited us to take a close look at its three newest smartphones. We gratefully took the opportunity, and we share our findings with you.

02-02-2020

21:29

Hands-on: Synology Virtual Machine Manager [Laatste Artikelen - Webwereld]

By now it's well known that your NAS can be used for much more than just storing files, but did you know you can also manage virtual machines with it? We explain how.

23-01-2020

16:42

What you need to know about FIDO keys [Laatste Artikelen - Webwereld]

Thanks to the FIDO2 standard, it is possible to log in securely to various online services without a password. Microsoft and Google, among others, already offer options for this. More organizations are likely to follow this year.

How to use your iPhone without an Apple ID [Laatste Artikelen - Webwereld]

These days you have to create an account for just about everything you want to do online, even if you don't plan to work online or simply don't feel like sharing your data with the manufacturer. Today we show you how to pull that off with your iPhone or iPad.

Major Internet Explorer vulnerability already exploited in the wild [Laatste Artikelen - Webwereld]

A new zero-day vulnerability has been discovered in Microsoft Internet Explorer. The flaw is already being exploited, and a security update is not yet available.

How to install Chrome extensions in the new Edge [Laatste Artikelen - Webwereld]

The new version of Edge is built with code from the Chromium project, but in the default configuration extensions can only be installed via the Microsoft Store. Fortunately, that is fairly easy to change.

19-01-2020

12:59

Windows 10 upgrade still free [Laatste Artikelen - Webwereld]

A few years ago, Microsoft gave users the option to upgrade from Windows 7 to Windows 10 for free. At times this went so far that even users who didn't want an upgrade received one. That offer is long gone, but upgrading for free is still possible, and it is now easier than ever. We explain how.

Chrome, Edge, Firefox: which browser is the fastest? [Laatste Artikelen - Webwereld]

A lot has changed in the PC browser market. About five years ago there was more competition and more fully independent development; now only two engines remain: the one behind Chrome and the one behind Firefox. With this month's release of Microsoft's Blink-based Edge, we look at benchmarks and real-world tests.

Cooler Master redesigns thermal paste tubes over drug suspicions [Laatste Artikelen - Webwereld]

Cooler Master has changed the look of its thermal paste syringes because, by its own account, the company is tired of having to keep explaining to parents that the contents are not drugs but thermal paste.

11-05-2019

18:55

Two Super Fast App Launchers for Ubuntu 19.04 [Tech Drive-in]

During the transition period, when GNOME Shell and Unity were pretty rough around the edges and slow to respond, third-party app launchers were a big deal. Over time the newer desktop environments improved and became fast, reliable and predictable, reducing the need for alternate app launchers.


As a result, many third-party app launchers have either slowed down development or simply ceased to exist. Ulauncher seems to be the only one to have bucked the trend so far. Synapse and Kupfer, on the other hand, though old and not as actively developed anymore, still pack a punch. Since Kupfer is too old school, we'll only be discussing Synapse and Ulauncher here.

Synapse

I still remember the excitement when I first reviewed Synapse more than 8 years ago. Back then, Synapse was something very unique to Linux and Ubuntu, and it still is in many ways. Though Synapse is not the active project it used to be, the launcher still works great even on a brand new Ubuntu 19.04.

No need to meddle with PPAs and DEBs: Synapse is available in the Ubuntu Software Center.

You can find and install Synapse directly from the Ubuntu Software Center, or simply search 'Synapse' in USC. Launch the app afterwards. Once launched, you can trigger Synapse with the Ctrl+Space keyboard shortcut.
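If you'd rather stay in the terminal, Synapse is also packaged in Ubuntu's repositories; a minimal sketch, assuming the package name is synapse (worth double-checking with apt search first):

sudo apt update
sudo apt install synapse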

Ulauncher

The new kid on the block, apparently. But new doesn't mean it is lacking in any way. What makes Ulauncher quite unique is its extensions. And there are plenty to choose from.


From an extension that lets you control your Spotify desktop app, to generic unit converters or simple timers, Ulauncher extensions have got you covered.

Let's install the app first. Download the DEB file for Debian/Ubuntu users and double-click the downloaded file to install it. To complete the installation via Terminal instead, do this:


sudo dpkg -i ~/Downloads/ulauncher_4.3.2.r8_all.deb

Change the filename/location if they are different in your case. And if the command reports dependency errors, force the installation using the command below.

sudo apt-get install -f
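Alternatively, on recent Ubuntu releases apt can install a local DEB and resolve its dependencies in one step, avoiding the force-install dance (same assumed filename as above):

sudo apt install ~/Downloads/ulauncher_4.3.2.r8_all.deb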

Done. Post-install, launch the app from your app list and you're good to go. Once started, Ulauncher will sit in your system tray by default. And just like Synapse, Ctrl+Space will trigger Ulauncher.


Installing extensions in Ulauncher is pretty straightforward too.


Find the extensions you want on the Ulauncher Extensions page. Trigger a Ulauncher instance with Ctrl+Space and go to Settings > Extensions > Add extension. Provide the URL from the extension page and let the app do the rest.

29-04-2019

17:40

A Standalone Video Player for Netflix, YouTube, Twitch on Ubuntu 19.04 [Tech Drive-in]

Snap apps are a godsend. ElectronPlayer is an Electron based app available on Snapstore that doubles up as a standalone media player for video streaming services such as Netflix, YouTube, Twitch, Floatplane etc.

And it works great on Ubuntu 19.04 "disco dingo". From what we've tested, Netflix works like a charm, and so does YouTube. ElectronPlayer also has a picture-in-picture mode that lets it run above desktop and full-screen applications.


For me, this is great because I can free up tabs on my Firefox window, which is almost never clutter-free.

Use the command below to install ElectronPlayer directly from the Snap Store. Open Terminal (Ctrl+Alt+T) and copy:

sudo snap install electronplayer

Press ENTER and type your password when asked.
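To confirm the snap actually landed, you can list it by name (snap list accepts an optional snap name):

snap list electronplayer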

After the process is complete, search for ElectronPlayer in your app list. Sign in to your favorite video streaming services and you are good to go. Let us know your feedback in the comments.

22-04-2019

19:07

Howto Upgrade to Ubuntu 19.04 from Ubuntu 18.10, Ubuntu 18.04 LTS [Tech Drive-in]

As most of you should know already, Ubuntu 19.04 "disco dingo" has been released. A lot of things have changed, see our comprehensive list of improvements in Ubuntu 19.04. Though it is not really necessary to make the jump, I'm sure many here would prefer to have the latest and greatest from Ubuntu. Here's how you upgrade to Ubuntu 19.04 from Ubuntu 18.10 and Ubuntu 18.04.

Upgrading to Ubuntu 19.04 from Ubuntu 18.04 LTS is tricky. There is no way you can make the jump from Ubuntu 18.04 LTS directly to Ubuntu 19.04. For that, you need to upgrade to Ubuntu 18.10 first. Pretty disappointing, I know. But when upgrading an entire OS, you can't be too careful.

And the process itself is not as tedious or time-consuming as on Windows. Also, unlike Windows, the upgrades are not forced upon you while you're in the middle of something.

[screenshot: Ubuntu 19.04 desktop with the dash-to-dock extension]

If you're wondering how the dock in the above screenshot rests at the bottom of the Ubuntu desktop: it's the dash-to-dock GNOME Shell extension. That and more Ubuntu 19.04 tips and tricks here.

Upgrade to Ubuntu 19.04 from Ubuntu 18.10

Disclaimer: PLEASE backup your critical data before starting the upgrade process.

Let's start with the assumption that you're on Ubuntu 18.04 LTS. To be offered the 18.10 upgrade at all, make sure "Notify me of a new Ubuntu version" is set to "For any new version" in the Software & Updates app.

After running the upgrade from Ubuntu 18.04 LTS to Ubuntu 18.10, the prompt will ask for a full system reboot. Please do that, and make sure everything is running smoothly afterwards. Now you have a clean new Ubuntu 18.10 up and running. Let's begin the Ubuntu 19.04 upgrade process.
  • Make sure your laptop is plugged-in, this is going to take time. Stable Internet connection is a must too. 
  • Run your Software Updater app, and install all the updates available. 
  • Post the update, you should be prompted with an "Ubuntu 19.04 is available" window. It will guide you through the required steps without much hassle. 
  • If not, fire up Software & Updates app and check for updates. 
  • If neither of these worked in your case, there's always the command-line option to force the upgrade. Open the Terminal app (keyboard shortcut: CTRL+ALT+T) and run the command below.
sudo do-release-upgrade -d
  • Type the password when prompted. Don't let the simplicity of the command fool you: this is just the start of a long and complicated process. do-release-upgrade will check for available upgrades and then give you an estimated time and the bandwidth required to complete the process (see the quick sanity checks after this list).
  • Read the instructions carefully and proceed. The process took only about an hour for me; it depends entirely on your internet speed and system resources.
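Before and after each hop, it's worth confirming which release you're actually on, and that the update manager is set to offer non-LTS upgrades; both commands below are standard Ubuntu tooling:

lsb_release -a
# the upgrader only offers 18.10/19.04 when Prompt=normal in this file
grep Prompt /etc/update-manager/release-upgrades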
So, how did it go? Was the upgrade process smooth as it should be? And what do you think about new Ubuntu 19.04 "disco dingo"? Let us know in the comments.

20-04-2019

15:37

Ubuntu 19.04 Updates - 7 Things to Know [Tech Drive-in]

Ubuntu 19.04 has been released. I've been using it for the past week or so, and even as a pre-beta the OS was pretty stable and not buggy at all. Here are a bunch of things you should know about Ubuntu 19.04.


1. Codename: "Disco Dingo"

How about that! As most of you know already, Canonical names its semiannual Ubuntu releases using an adjective and an animal with the same first letter (Intrepid Ibex, Feisty Fawn and Maverick Meerkat, for example, were some of my favourites). And Ubuntu 19.04's codename, "Disco Dingo", has to be one of the coolest codenames ever for an OS.


2. Ubuntu 19.04 Theme Updates

A new cleaner, crisper-looking Ubuntu is coming your way. Can you notice the subtle changes to the default Ubuntu theme in the screenshot below? Like the new deep-black top panel and launcher? Very tastefully done.

[screenshot: the updated default theme in Ubuntu 19.04]

To be sure, this is now looking more and more like vanilla GNOME and less like Unity, which is not a bad thing.


There are changes to the icons too. That hideous blue Trash icon is gone. Other changes include a new Update Manager icon, Ubuntu Software Center icon and Settings icon.

3. Ubuntu 19.04 Official Mascot

GIFs speak louder than words. Meet the official "Disco Dingo" mascot.

[animated GIF: the official Disco Dingo mascot]

Pretty awesome, right? The "Disco Dingo" mascot calls for infinite wallpaper variations.

4. The New Default Wallpaper

The new "Disco Dingo" themed wallpaper is so sweet: very Ubuntu-ish yet unique. A gray scale version of the same wallpaper is a system default too.


UPDATE: There's an entire suite of newer and better wallpapers on Ubuntu 19.04!

5. Linux Kernel 5.0 Support

Ubuntu 19.04 "Disco Dingo" will officially support the recently released Linux Kernel version 5.0. Among other things, Linux Kernel 5.0 comes with AMD FreeSync display support which is awesome news to users of high-end AMD Radeon graphics cards.


Also important to note is the added support for Adiantum data encryption and Raspberry Pi touchscreens. Apart from that, Kernel 5.0 brings the usual CPU performance improvements and improved hardware support.
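To confirm which kernel your own installation is running, one command is enough:

uname -r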

6. Livepatch is ON

Ubuntu 19.04's 'Software and Updates' app has a new default tab called Livepatch. This new feature should ideally help you to apply critical kernel patches without rebooting.

Livepatch may not mean much to a normal user who regularly powers down his or her computer, but it can be very useful for enterprise users, where any downtime is simply not acceptable.


Canonical introduced this feature in Ubuntu 18.04 LTS, but it was later removed when Ubuntu 18.10 was released. The Livepatch feature is disabled on my Ubuntu 19.04 installation though, with a "Livepatch is not available for this system" warning. Not exactly sure what that means. Will update.
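For those who'd rather poke at Livepatch from the terminal, the service is also exposed as a snap with a small CLI; a sketch, assuming you have an Ubuntu One token (the token below is a placeholder):

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <your-ubuntu-one-token>
canonical-livepatch status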

7. Ubuntu 19.04 Release Schedule

The beta freeze is scheduled for March 28th and the final release for April 18th.


Normally, post the beta release, it is safe to install Ubuntu 19.04 for normal everyday use in my opinion, but ONLY if you are inclined to give it a spin before everyone else, of course. I'd never recommend a pre-release OS on production machines. Ubuntu 19.04 Daily Build Download.


My biggest disappointment, though, is the supposed Ubuntu Software Center revamp, which is now confirmed not to make it into this release. Follow us on Twitter and Facebook for more Ubuntu 19.04 release updates.


Recommended read: Top things to do after installing Ubuntu 19.04

13-04-2019

15:47

LinuxBoot: A Linux Foundation Project to replace UEFI Components [Tech Drive-in]

UEFI has a pretty bad reputation among many in the Linux community. UEFI unnecessarily complicated Linux installation and distro-hopping on machines with Windows pre-installed, for example. The LinuxBoot project by the Linux Foundation aims to replace some firmware functionality, like the UEFI DXE phase, with Linux components.

What is UEFI?
UEFI is a standard, or a specification, that replaced the legacy BIOS firmware which was the industry standard for decades. Essentially, UEFI defines the software interface between the operating system and the platform firmware.


UEFI boot has three phases: SEC, PEI and DXE. In the Driver eXecution Environment (DXE) phase, the UEFI system loads drivers for configured devices. LinuxBoot replaces specific firmware functionality, like the UEFI DXE phase, with a Linux kernel and runtime.

LinuxBoot and the Future of System Startup
"Firmware has always had a simple purpose: to boot the OS. Achieving that has become much more difficult due to increasing complexity of both hardware and deployment. Firmware often must set up many components in the system, interface with more varieties of boot media, including high-speed storage and networking interfaces, and support advanced protocols and security features."  writes Linux Foundation.


LinuxBoot will replace this slow and often error-prone code with a Linux Kernel. This alone should significantly improve system startup performance.

On top of that, LinuxBoot intends to achieve increased boot reliability and boot-time performance by removing unnecessary code and by using reliable Linux drivers instead of lightly tested firmware drivers. LinuxBoot claims that these improvements could potentially help make the system startup process as much as 20 times faster.

In fact, this "Linux to boot Linux" technique has been fairly common place in supercomputers, consumer electronics, and military applications, for decades. LinuxBoot looks to take this proven technique and improve on it so that it can be deployed and used more widely by individual users and companies.

Current Status
LinuxBoot is not as obscure or far-fetched as, say, lowRISC (an open-source, Linux-capable SoC) or even OpenPilot. At the FOSDEM 2019 summit, Facebook engineers revealed that their company is actively integrating and fine-tuning LinuxBoot for its needs, freeing its hardware down to the lowest levels.


Facebook and Google are deeply involved in the LinuxBoot project. Being large data companies, where even small improvements in system startup speed and reliability can bring major advantages, their involvement is not a surprise. To put this in perspective, a large data center run by Google or Facebook can have tens of thousands of servers. Other companies involved include Horizon Computing, Two Sigma and 9elements Cyber Security.

03-04-2019

20:18

Ubuntu 19.04 Gets Newer and Better Wallpapers [Tech Drive-in]

A "Disco Dingo" themed wallpaper was already there. But the latest update bring a bunch of new wallpapers as system defaults on Ubuntu 19.04.

[image: the new Ubuntu 19.04 default wallpaper]

Pretty, right? Here's the older one for comparison.

[image: the previous Disco Dingo default wallpaper]

The newer wallpaper is definitely cleaner and more professional-looking, with better colors. I won't bother tinkering with wallpapers anymore; the new default on Ubuntu 19.04 is just perfect.

[image: a darker Disco Dingo wallpaper variant]

Too funky for my taste. But I'm sure there will be many who will prefer this darker, edgier wallpaper over the others. As we said earlier, the new "disco dingo" mascot calls for infinite wallpaper variations.


Apart from theme and artwork updates, Ubuntu 19.04 has the latest Linux Kernel version 5.0 (5.0.0.8 to be precise). You can read more about Ubuntu 19.04 features and updates here.

Ubuntu 19.04 hit beta a few days ago. Though it is already a pretty stable release for a beta, I'd recommend waiting another 15 days or so until the final release. If all you care about are the wallpapers, you can download the new Ubuntu 19.04 wallpapers here. It's a DEB file; just double-click it after downloading.
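If the double-click route doesn't hand the file to a package installer, apt can install a downloaded DEB directly; the filename below is illustrative, so use whatever your download is actually called:

cd ~/Downloads
# filename is illustrative; substitute the actual name of the downloaded DEB
sudo apt install ./ubuntu-wallpapers-disco.deb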

27-03-2019

19:06

UBports Installer for Ubuntu Touch is just too good! [Tech Drive-in]

Even as someone who bought into the Ubuntu Touch hype very early, I was not expecting much from UBports, to be honest. But to my pleasant surprise, the UBports Installer turned my 4-year-old BQ Aquaris E4.5 Ubuntu Edition hardware into a slick, clean, and usable phone again.



UBports Installer and Ubuntu Touch
As many of you know already, Ubuntu Touch was Canonical's failed attempt to deliver a competent mobile operating system based on its desktop version. The first smartphone with Ubuntu Touch preinstalled was released in 2015 by BQ, a Spanish smartphone manufacturer. And in April 2016, the world's first Ubuntu Touch based tablet, the BQ Aquaris M10 Ubuntu Edition, was released.

Though the initial response was quite promising, Ubuntu Touch failed to make a significant enough splash in the smartphone space. In fact, Ubuntu Touch was not alone: many other mobile OS projects, like Firefox OS or even the Samsung-owned Tizen OS for that matter, failed to capture a sizable market share from the Android/iOS duopoly.

To the disappointment of Ubuntu enthusiasts, Mark Shuttleworth announced the termination of Ubuntu Touch development in April 2017.


Rise of UBports and revival of Ubuntu Touch Project
For all its inadequacies, Ubuntu Touch was one unique OS. It looked and felt different from most other mobile operating systems. And Ubuntu Touch enthusiasts were not ready to give up on it so easily. Enter UBports.

UBports turned Ubuntu Touch into a community-driven project. Passionate people from around the world now contribute to the development of Ubuntu Touch. In August 2018, UBports released OTA-4, upgrading Ubuntu Touch's base from Canonical's original Ubuntu 15.04 (Vivid Vervet) to the current long-term support version, Ubuntu 16.04 LTS.

They actively test the OS on a number of legacy smartphones and help people install Ubuntu Touch on their own devices using an incredibly capable cross-platform installer.

Ubuntu Touch Installer on Ubuntu 19.04
Though I knew about the UBports project before, I was never motivated enough to try the new OS on my Aquaris E4.5, until yesterday. By a sheer stroke of luck, I stumbled upon the UBports Installer in Ubuntu Software Center. I was curious to find out if it really worked as claimed on the page.
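If it doesn't show up in your Software Center, the installer is also published as a snap; the package name ubports-installer is my assumption here, so verify it on snapcraft.io first:

sudo snap install ubports-installer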


I fired up the app on my Ubuntu 19.04 and plugged in my Aquaris E4.5. Voila! The installer detected my phone in a jiffy. Since there wasn't much data on my BQ, I proceeded with the Ubuntu Touch installation.


The instructions were pretty straightforward, and it took probably 15 minutes to download, restart, and install the 16.04 LTS based Ubuntu Touch on my 4-year-old hardware.


In my experience, even flashing an Android phone was never this easy! My Ubuntu phone is usable again, without all the unnecessary bloat that made it clunky. This post is a tribute to the UBports community for the amazing work they've been doing with Ubuntu Touch. Here's also a list of smartphone hardware that can run Ubuntu Touch.

21-03-2019

19:27

Google's Stadia Cloud Gaming Service, Powered by Linux [Tech Drive-in]

Unless you live under a rock, you must've been inundated with nonstop news about Google's high-octane launch ceremony yesterday, where the much-hyped game streaming platform called Stadia was unveiled.

Stadia, or Project Stream as it was earlier called, is a cloud gaming service where the games themselves are hosted on Google's servers, while the visual feedback from the game is streamed to the player's device through Google Chrome. If this technology catches on, and if it works just as well as shown in the demos, Stadia could be what the future of gaming looks like.

Stadia, Powered by Linux

It is fairly common knowledge that Google's data centers use Linux rather extensively. So it is not really surprising that Google would use Linux to power its cloud-based Stadia gaming service.


Stadia's architecture is built on Google's data center network, which has an extensive presence across the planet. Google is offering a virtual platform where processing resources can be scaled up to match your gaming needs, without the end user ever spending a dime more on hardware.


And since Google's data centers mostly run on Linux, the games on Stadia will run on Linux too, through the Vulkan API. This is great news for gaming on Linux. Even if Stadia doesn't directly result in more games on Linux, it could potentially make gaming a platform-agnostic, cloud-based service, like Netflix.

With Stadia, "the data center is your platform," claims Majd Bakar, head of engineering at Stadia. Stadia is not constrained by limitations of traditional console systems, he adds. Stadia is a "truly flexible, scalable, and modern platform" that takes into account the future requirements of the gaming ecosystem. When launched later this year, Stadia will be able to stream at 4K HDR and 60fps with surround sound.


Watch the full presentation here. Tell us what you think about Stadia in the comments.

13-03-2019

16:43

Purism: A Linux OS is talking Convergence again [Tech Drive-in]

The hype around "convergence" just won't die it seems. We have heard it from Ubuntu a lot, KDE, even from Google and Apple in fact. But the dream of true convergence, a uniform OS experience across platforms, never really materialised. Even behemoths like Apple and Googled failed to pull it off with their Android/iOS duopoly. Purism's Debian based PureOS wants to change all that for good.


Purism, PureOS, and the future of Convergence

Purism, a computer technology company based out of California, shot to fame with its Librem series of privacy- and security-focused laptops and smartphones. Purism raised over half a million dollars through a Crowd Supply crowdfunding campaign for its laptop hardware back in 2015. And unlike many crowdfunding megahits which later turned out to be duds, Purism delivered on its promises big time.


Later, in 2017, Purism surprised everyone again with the successful crowdfunding campaign for its Linux based opensource smartphone, dubbed Librem 5. The campaign raised over $2.6 million, surpassing its $1.5 million crowdfunding goal in just two weeks. Purism's Librem 5 smartphones will start shipping late 2019.

Librem, which loosely refers to free and opensource software, is the brand name chosen by Purism for its laptops and smartphones. One of the biggest USPs of Purism devices is the hardware kill switches they come loaded with, which physically disconnect the phone's camera, WiFi, Bluetooth, and mobile broadband modem.

Meet PureOS, Purism's Debian Based Linux OS

PureOS is a free and opensource, Debian based Linux distribution which runs on all Librem hardware, including its smartphones. PureOS is endorsed by the Free Software Foundation.


The term convergence, in computer speak, refers to applications that work seamlessly across platforms, bringing a consistent look and feel and similar functionality to your smartphone and your computer.
"Purism is beating the duopoly to that dream, with PureOS: we are now announcing that Purism’s PureOS is convergent, and has laid the foundation for all future applications to run on both the Librem 5 phone and Librem laptops, from the same PureOS release", announced Jeremiah Foster, the PureOS director at Purism (by duopoly, he was referring to Android/iOS platforms that dominate smartphone OS ecosystem).
Ideally, convergence should help app developers and users at the same time. App developers should be able to write their app once, test it once, and run it everywhere. And users should be able to seamlessly use, connect and sync apps across devices and platforms.

Easier said than done though. As Jeremiah Foster himself explains:
"it turns out that this is really hard to do unless you have complete control of software source code and access to hardware itself. Even then, there is a catch; you need to compile software for both the phone’s CPU and the laptop CPU which are usually different architectures. This is a complex process that often reveals assumptions made in software development but it shows that to build a truly convergent device you need to design for convergence from the beginning."

How is PureOS achieving convergence?

PureOS has had a distinct advantage when it comes to convergence. Purism is a hardware maker that also designs its platforms and software. From its inception, Purism has been working on a "universal operating system" that can run on different CPU architectures.


"By basing PureOS on a solid, foundational operating system – one that has been solving this performance and run-everywhere problem for years – means there is a large set of packaged software that 'just works' on many different types of CPUs."

The second big factor is "adaptive design": software apps that can adapt easily to desktop or mobile, just like a modern website with responsive design.


"Purism is hard at work on creating adaptive GNOME apps – and the community is joining this effort as well – apps that look great, and work great, both on a phone and on a laptop".

Purism has also developed an adaptive presentation library for GTK+ and GNOME, called libhandy, which third-party app developers can use to contribute to Purism's convergence ecosystem. Still under active development, libhandy is already packaged into PureOS and Debian.

Florida based Startup Builds Ubuntu Powered Aerial Robotics [Tech Drive-in]

Apellix is a Florida-based startup that specialises in aerial robotics. It intends to create safer work environments by replacing workers with task-specific drones that complete high-risk jobs at dangerous, elevated work sites.


Robotics with an Ubuntu Twist

Ubuntu is expanding its reach into robotics and IoT in a big way. A few years ago at the TechCrunch Disrupt event, UAVIA unveiled a new generation of its one hundred percent remotely operable drones (an industry first, they claimed), which were built with Ubuntu under the hood. Then there were others, like Erle Robotics (recently renamed Acutronic Robotics), which made big strides in drone technology using Ubuntu at its core.


Apellix is the only aerial robotics company with drones "capable of making contact with structures through fully computer-controlled flight", claims Robert Dahlstrom, Founder and CEO of Apellix.

"At height, a human pilot cannot accurately gauge distance. At 45m off the ground, they can’t tell if they are 8cm or 80cm away from the structure. With our solutions, an engineer simply positions the drone near the inspection site, then the on-board computer takes over and automates the delicate docking process." He adds.


Apellix considered many popular Linux distributions before zeroing in on Ubuntu for its stability, reliability, and large developer ecosystem. Ubuntu's versatility also enabled Apellix to use the same underlying OS platform and software packages across development and production.

The team is currently developing on Ubuntu Server, with the intent to migrate to Ubuntu Core. The company is also making extensive use of Ubuntu Server, both on board its robotic systems and in its cloud operations, according to a case study by Canonical, the company behind Ubuntu.


"With our aircraft, an error of 2.5 cm could be the difference between a successful flight and a crash," comments Dahlstrom. "Software is core to avoiding those errors and allowing us to do what we do - so we knew that placing the right OS at the heart of our solutions was essential." 

Openpilot: An Opensource Alternative to Tesla Autopilot, GM Super Cruise [Tech Drive-in]

Openpilot is an opensource driving agent which at the moment can perform industry-standard functions such as Adaptive Cruise Control and Lane Keeping Assist System for a select few auto manufacturers.



Meet Project Openpilot

Open source is no stranger to the world of autonomous cars. Even as far back as 2013, Ubuntu was spotted in Mercedes-Benz driverless cars, and it is also a well-known fact that Google is using a 'lightly customized Ubuntu' at the core of its push towards building fully autonomous cars.

Openpilot, though, is unique in its own way. It's an opensource driving agent that already works (as is claimed) on a number of models from manufacturers such as Toyota, Kia, Honda, Chevrolet, Hyundai and Jeep.


Above image: an Openpilot user getting a "distracted" alert. Apart from Adaptive Cruise Control (ACC) and Lane Keeping Assist System functions, Openpilot's developers claim that their technology currently is "about on par with Tesla Autopilot and GM Super Cruise, and better than all other manufacturers."

If Tesla's Autopilot was iOS, Openpilot developers would like their product to become the "Android for cars", the ubiquitous software of choice when autonomous systems on cars goes universal.



The Openpilot-endorsed, officially supported list of cars keeps growing. It now includes some 40-odd models from manufacturers ranging from Toyota to Hyundai. And they are actively testing Openpilot on newer cars from VW, Subaru and others, according to their Twitter feed.

Even a lower variant of the Tesla Model S, which came without the Tesla Autopilot system, was upgraded with comma.ai's Openpilot solution, which then mimicked a number of features from Tesla Autopilot, including automatic steering on highways, according to this article. (comma.ai is the startup behind Openpilot.)

Related read: Udacity's attempts to build a fully opensource self-driving car, and Linux Foundation's Automotive Grade Linux (AGL) infotainment system project which Toyota intends to use in its future cars.

RIOT OS: A tiny Opensource OS for the 'Internet of Things' (IoT) [Tech Drive-in]

"RIOT powers the Internet of Things like Linux powers the Internet." RIOT is a small, free and opensource operating system for the memory constrained, low power wireless IoT devices.


RIOT OS: A tiny OS for embedded systems

Initially developed by Freie Universität Berlin (FU Berlin), the INRIA institute and HAW Hamburg, RIOT OS has evolved over the years into a very competent alternative to TinyOS, Contiki and the like. It supports application programming in languages such as C and C++, and provides full multithreading and real-time capabilities. RIOT can run on 8-bit, 16-bit and 32-bit processors, including ARM Cortex-M.
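To give a feel for the workflow, RIOT applications are built with make straight from the source tree; a minimal sketch, assuming a local clone of the project's GitHub repository:

git clone https://github.com/RIOT-OS/RIOT.git
cd RIOT/examples/hello-world
# BOARD=native builds RIOT as an ordinary Linux process, handy for a first try
make BOARD=native all term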


RIOT is opensource, has its source code published on GitHub, and is based on a microkernel architecture (the bare minimum software required to implement an operating system). RIOT OS vs competition:

[image: table comparing RIOT OS with other IoT operating systems]

More information on RIOT OS can be found here. RIOT summits are held annually in major cities across Europe; if you are interested, pin this up. Thank you for reading.

30-10-2018

12:29

IBM, the 6th biggest contributor to Linux Kernel, acquires RedHat for $34 Billion [Tech Drive-in]

The $34 billion all-cash deal to purchase opensource pioneer Red Hat is IBM's biggest ever acquisition by far. The deal will give IBM a major foothold in the fast-growing cloud computing market, and the combined entity could give stiff competition to Amazon's cloud computing platform, AWS. But what about Red Hat and its future?


Another Oracle - Sun Microsystems deal in the making?
The alarmists among us might be quick to compare the IBM - Red Hat deal with the decade-old deal between Oracle Corporation and Sun Microsystems, which was then a major player in the opensource software scene.

But fear not. Unlike Oracle (which killed off Sun's OpenSolaris OS almost immediately after acquisition and even started a patent war against Android using Sun's Java patents), IBM is already a major contributor to opensource software including the mighty Linux Kernel. In fact, IBM was the 6th biggest contributor to Linux kernel in 2017.

What's in it for IBM?
With the acquisition of Red Hat, IBM becomes the world's #1 hybrid cloud provider, "offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses", according to Ginni Rometty, IBM Chairman, President and CEO. She adds:

“Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

The Future of Red Hat
The Red Hat story is almost as old as Linux itself. Founded in 1993, Red Hat's growth was phenomenal. Over the next two decades, Red Hat went on to establish itself as the premier Linux company, and Red Hat OS was the enterprise Linux operating system of choice. It set the benchmark for others like Ubuntu, openSUSE and CentOS to follow. Red Hat is currently the second largest corporate contributor to the Linux kernel after Intel (Intel really stepped up its Linux kernel contributions post-2013).

Regular users might be more familiar with Fedora Project, a more user-friendly operating system maintained by Red Hat that competes with mainstream, non-enterprise operating systems like Ubuntu, elementary OS, Linux Mint or even Windows 10 for that matter. Will Red Hat be able to stay independent post acquisition?

According to the official press release, "IBM will remain committed to Red Hat’s open governance, open source contributions, participation in the open source community and development model, and fostering its widespread developer ecosystem. In addition, IBM and Red Hat will remain committed to the continued freedom of open source, via such efforts as Patent Promise, GPL Cooperation Commitment, the Open Invention Network and the LOT Network." Well, that's a huge relief.

In fact, IBM and Red Hat have been partners for over 20 years, with IBM serving as an early supporter of Linux, collaborating with Red Hat to help develop and grow enterprise-grade Linux. And as IBM's CEO mentioned, the acquisition is more of an evolution of the long-standing partnership between the two companies.
"Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation."
Predicting the future can be tricky. A lot of things can go wrong. But one thing is sure, the acquisition of Red Hat by IBM is nothing like the Oracle - Sun deal. Between them, IBM and Red Hat must have contributed more to the open source community than any other organization.

20-09-2018

20:06

Meet 'Project Fusion': An Attempt to Integrate Tor into Firefox [Tech Drive-in]

A real private mode in Firefox? A Tor integrated Firefox could just be that. Tor Project is currently working with Mozilla to integrate Tor into Firefox.


Over the years, and more so since the Cambridge Analytica scandal, Mozilla has taken a progressively tougher stance on user privacy. Firefox's Facebook Container extension, for example, makes it much harder for Facebook to collect data from your browsing activities (yep, that's a thing: Facebook is tracking your every move on the web). The extension now includes Facebook Messenger and Instagram as well.

Firefox with Tor Integration

For starters, Tor is a free software and an open network for anonymous communication over the web. "Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location."

And don't confuse this project with Tor Browser, which is a web browser with Tor's elements built on top of Firefox. Tor Browser in its current form has many limitations. Since it is based on Firefox ESR, it takes a lot of time and effort to rebase the browser with new features from Firefox's stable builds every year or so.

Enter 'Project Fusion'

Now that Mozilla has officially taken over the work of integrating Tor into Firefox through Project Fusion, things could change for the better. With the intention of creating a 'super-private' mode in Firefox that supports First Party Isolation (which prevents cookies from tracking you across domains), Fingerprinting Resistance (which blocks user tracking through canvas elements), and a Tor proxy, 'Project Fusion' is aiming big. To put it together, the goals of 'Project Fusion' can be condensed into four points (a small preference sketch follows the list).
  • Implement fingerprinting resistance, make it more user-friendly, and reduce web breakage.
  • Implement proxy bypass framework.
  • Figure out the best way to integrate Tor proxy into Firefox.
  • Real private browsing mode in Firefox, with First Party Isolation, Fingerprinting Resistance, and Tor proxy.
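The first two goals build on preferences that already exist in stable Firefox, so you can get a rough preview today. A minimal sketch for a user.js file in your Firefox profile directory (the prefs are real; expect some web breakage with both enabled):

// user.js in your Firefox profile directory
user_pref("privacy.firstparty.isolate", true);      // First Party Isolation
user_pref("privacy.resistFingerprinting", true);    // Fingerprinting Resistance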
As good as it sounds, Project Fusion could still be years away or may not happen at all given the complexity of the work. According to a Tor Project Developer at Mozilla:
"Our ultimate goal is a long way away because of the amount of work to do and the necessity to match the safety of Tor Browser in Firefox when providing a Tor mode. There's no guarantee this will happen, but I hope it will and we will keep working towards it."
If you want to help, Firefox bugs tagged 'fingerprinting' in the whiteboard are a good place to start. Further reading at the Tor 'Project Fusion' page.

22-06-2018

19:09

Germany says No to Public Cloud, Chooses Nextcloud's Open Source Solution [Tech Drive-in]

Germany's Federal Information Technology Centre (ITZBund) opts for an on-premise cloud solution which, unlike the fancy public cloud offerings, is completely private and under its direct control.

Germany's Open Source Migration

Given the recent privacy mishaps at some of the biggest public cloud solution providers on the planet, it is only natural that government agencies across the world are opting for solutions that give users more privacy and security. If the recent Facebook - Cambridge Analytica debacle is any indication, data vulnerability has become a serious national security concern for all countries.

In light of these developments, the German government's IT service provider, ITZBund, has chosen Nextcloud as its cloud solutions partner. Nextcloud is a free and open source cloud solutions company based out of Europe that lets you install and run its software on your own private server. ITZBund has been running a pilot since 2016 with some 5,000 users on Nextcloud's platform.
"Nextcloud is pleased to announce that the German Federal Information Technology Center (ITZBund) has chosen Nextcloud as their solution for efficient and secure file sharing and collaboration in a public tender. Nextcloud is operated by the ITZBund, the central IT service provider of the federal government, and made available to around 300,000 users. ITZBund uses a Nextcloud Enterprise Subscription to gain access to operational, scaling and security expertise of Nextcloud GmbH as well as long-term support of the software."
ITZBund employs about 2,700 people that include IT specialists, engineers and network and security professionals. After the successful completion of the pilot, a public tender was floated by ITZBund which eventually selected Nextcloud as their preferred partner. Nextcloud scored high on security requirements and scalability, which it addressed through its unique Apps concept.
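For readers curious to try the same self-hosted model on a small scale, the official Nextcloud image on Docker Hub makes it easy to spin up a throwaway instance; a test sketch, not a production deployment:

docker run -d --name nextcloud-test -p 8080:80 nextcloud
# then open http://localhost:8080 and follow the setup wizard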

31-05-2018

12:23

City of Bern Awards Switzerland's Largest Open Source Contract for its Schools [Tech Drive-in]

In another major win within a span of weeks for the proponents of open source solutions in the EU, Bern, the capital of Switzerland, is pushing ahead with its plans to adopt open source tools as the software of choice for all its public schools. If all goes well, some 10,000 students in Swiss schools could soon start getting their training on an IT infrastructure that is largely open source.

Switzerland's Largest Open Source deal

Over 10,000 Students to Benefit

Switzerland's largest open-source deal introduces a brand new IT infrastructure for the public schools of its capital city. The package includes Collabora Cloud Office, an online version of LibreOffice which is to be hosted in the City of Bern's data center, as its core component. Nextcloud, Kolab, Moodle and Mahara are the other prominent open source tools included in the package. The contract is worth CHF 13.7 million over 6 years.

In an interview given to 'Der Bund', one of Switzerland's oldest news publications, open-source advocate Matthias Stürmer, EPP city councillor and IT expert, said that this is probably the largest ever open-source deal in Switzerland.

Many European countries are clamoring to adopt open source solutions for their cities and schools. From the German Federal Information Technology Centre's (ITZBund) recent selection of Nextcloud as its cloud solutions partner, to the city of Turin's adoption of Ubuntu, to the Italian military's LibreOffice migration, Europe's recognition of open source solutions as a legitimate alternative is gaining ground.

Ironically enough, most of this software will run on the proprietary iOS platform, as the client devices given to students will all be Apple iPads. But hey, it had to start somewhere. When Europe's richest countries adopt open source, others will surely take notice. Stay tuned for updates. [via inside-channels.ch]

15-04-2018

16:47

LG Makes its webOS Operating System Open Source, Again! [Tech Drive-in]

Not many might remember HP's capable webOS. The open source webOS operating system was HP's answer to the Android and iOS platforms. It was slick and very user-friendly from the start; some even considered it a better alternative to Android for tablets at the time. But like many other smaller players, HP's webOS just couldn't find enough takers, and the project was abruptly ended and sold off to LG.


The Open Source LG webOS

Under the 2013 agreement with HP Inc., LG Electronics had unlimited access to all webOS related documentation and source code. When LG took the project underground, webOS was still an open-source project.

After many years of development, webOS is now LG's platform of choice for its Smart TV division. It is generally considered one of the better sorted Smart TV user interfaces. LG is now ready to take the platform beyond Smart TVs. LG has developed an open source version of its platform, called webOS Open Source Edition, now available to the public at webosose.org.

Dr. I.P. Park, CTO at LG Electronics, had this to say: "webOS has come a long way since then and is now a mature and stable platform ready to move beyond TVs to join the very exclusive group of operating systems that have been successfully commercialized at such a mass level. As we move from an app-based environment to a web-based one, we believe the true potential of webOS has yet to be seen."

By open sourcing webOS, it looks like LG is gunning for Samsung's Tizen OS, which is also open source and built on top of Linux. In our opinion, device manufacturers preferring open platforms (like Automotive Grade Linux), over Android or iOS is a welcome development for the long-term health of the industry in general.

06-03-2018

19-09-2017

10:33

Embedded Linux Engineer [Job Openings]

You're eager to work with Linux in an exciting environment. You have a lot of PC equipment experience. Prior experience with embedded Linux or small-footprint distributions is considered a plus. Region East/West Flanders.

Linux Teacher [Job Openings]

We're looking for someone capable of teaching Linux and/or Solaris professionally. Ideally the candidate has experience teaching Linux, and possibly other non-Windows OSes as well.

Kernel Developer [Job Openings]

We're looking for someone with kernel device driver development experience. Preferably, but not necessarily, with knowledge of AV or TV devices.

C/C++ Developers [Job Openings]

We're looking for Linux C/C++ developers. Region Leuven.

Feeds

Feed | RSS | Last fetched | Next fetched after
Computable | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
GNOMON | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
http://www.h-online.com/news/atom.xml | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
http://www.h-online.com/open/atom.xml | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Job Openings | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Laatste Artikelen - Webwereld | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
linux blogs franz ulenaers | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Linux Journal - The Original Magazine of the Linux Community | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Linuxtoday.com | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
OMG! Ubuntu! | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Planet Python | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Press Releases – The Document Foundation Blog | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Slashdot: Linux | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
Tech Drive-in | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43
ulefr01 - blog franz ulenaers | XML | 19-01-2021, 18:43 | 19-01-2021, 21:43

Last modified: Tuesday 19 January 2021 17:43
Copyright 2020 - Franz Ulenaers (email : franz.ulenaers@telenet.be)