Because of delays with my scholarship payment, if this post is useful to you I kindly ask a minimal donation on Buy Me a Coffee. It shall be used to continue my Open Source efforts. The full explanation is here: A Personal Message from an Open Source Contributor.

Continuing with the previous Selenium post, now I will process each job offer and organize its contents.

This requires the readxl package to read XLSX files:

```r
if (!require(readxl)) install.packages("readxl")
```

To read the XLSX from part 2 and start processing each offer, I start with:

```r
library(RSelenium)
library(rvest)
library(dplyr)
library(purrr)
library(writexl)
library(readxl)

# the XLSX exported at the end of part 2 (adjust the filename to your own export)
offers_tbl <- read_xlsx("offers_20250821.xlsx")
```

Each offer's link is then visited with Selenium and its contents are parsed into a tibble, `descriptions_tbl`, with one row per offer (a minimal sketch of such a loop is included at the end of this post). Since not every page parses cleanly, I count the blank values per column:

```r
descriptions_tbl %>%
  summarise(
    across(
      everything(),
      list(na_count = ~ sum(is.na(.))),
      .names = "{.col}_{.fn}"
    )
  )
```

which shows that all the blank values correspond to the same observations:

```
# A tibble: 1 × 6
  title_na_count institution_na_count positions_na_count city_na_count
           <int>                <int>              <int>         <int>
1            187                  187                187           187
# ℹ 2 more variables: compensation_na_count <int>, education_na_count <int>
```

I got 547 - 187 = 360 well-organized observations from a scraping process that took around five minutes. Not bad!

This needs an XLSX backup to avoid scraping twice:

```r
write_xlsx(descriptions_tbl, "descriptions_20250821.xlsx")
```

I hope this was useful. In the next parts I will cover some analysis and plots with this data.
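As a reference for the loop mentioned above, here is a minimal sketch of how each offer can be visited and parsed. The CSS selectors, the `link` column, and the Selenium connection details are assumptions for illustration, not the exact code behind the numbers above; the pattern is the usual one from the previous parts: navigate, grab the page source, and parse it with rvest.

```r
# A sketch only: the selectors and the link column are assumed, so
# adjust them after inspecting the site with your browser's dev tools.
remote_driver <- remoteDriver(
  remoteServerAddr = "localhost", # assumes a Selenium server already running
  port = 4444L,
  browserName = "firefox"
)
remote_driver$open()

read_offer <- function(url) {
  remote_driver$navigate(url)
  Sys.sleep(1) # be polite and give the page time to load

  page <- read_html(remote_driver$getPageSource()[[1]])

  # html_element() returns a missing node when a selector fails, and
  # html_text2() turns that into NA, which is where the blanks come from
  tibble(
    title        = page %>% html_element(".titulo")      %>% html_text2(),
    institution  = page %>% html_element(".institucion") %>% html_text2(),
    positions    = page %>% html_element(".vacantes")    %>% html_text2(),
    city         = page %>% html_element(".ciudad")      %>% html_text2(),
    compensation = page %>% html_element(".renta")       %>% html_text2(),
    education    = page %>% html_element(".formacion")   %>% html_text2()
  )
}

# assumes offers_tbl has a link column with each offer's URL
descriptions_tbl <- offers_tbl$link %>%
  map(read_offer) %>%
  list_rbind()

remote_driver$close()
```

Wrapping the parsing in its own function keeps the `map()` call readable and makes it easy to retry a single offer when a page misbehaves.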
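Because the blanks all fall on the same 187 rows, keeping the complete observations for the upcoming analysis only requires filtering on a single column. A minimal sketch, assuming `descriptions_tbl` as built above:

```r
# keep the 360 offers whose description parsed cleanly; since the NAs
# share the same rows, filtering on title alone is sufficient
complete_tbl <- descriptions_tbl %>%
  filter(!is.na(title))

nrow(complete_tbl)
# [1] 360
```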
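And to make the backup actually prevent a second scrape, one option is to guard the whole loop behind a file check. A sketch, assuming the same filename as above:

```r
# only scrape when no backup exists; otherwise reuse the saved copy
if (file.exists("descriptions_20250821.xlsx")) {
  descriptions_tbl <- read_xlsx("descriptions_20250821.xlsx")
} else {
  # ... run the scraping loop sketched above ...
  write_xlsx(descriptions_tbl, "descriptions_20250821.xlsx")
}
```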