Set, n-Tuple, Vector and Matrix — links and differences
Preliminary Notions:
I would like to start by noting that the terms set, tuple, vector, and matrix are fairly high-level abstractions that have come to be linked to somewhat generic notions across multiple subfields of mathematics, physics, and computer science. As a result, layman's definitions of these objects are widely available, while formal definitions remain difficult to ascertain. This is especially true if your aim is to have these formal definitions all reside within the same formal system. This brings us to our first problem: the formal definition of any given mathematical object really only holds water in the axiomatic or formal system within which it is defined. For example, Wikipedia says that:
"In mathematics, an n-tuple is a sequence (or ordered list) of n elements, where n is a non-negative integer."
However, in many systems, a sequence $a_n$ is precisely defined as a total function $a:\mathbb{N}\to\mathbb{R}$. This definition of sequence, combined with the definition of tuple in the quote above, implies that every tuple has a countably infinite number of entries. This, of course, is not a useful definition of tuple. The problem here is that we are mixing and matching the operational definitions of objects from different formal systems. I will now describe one possible way (in terms of sets) of formally relating all of the objects you mentioned, and try to answer all of your questions.
Sets:
Sets are objects that contain other objects. If an object $a$ is contained in a set $A$, it is said to be an element or a member of $A$, and is denoted $a\in A$. Two sets are equal iff they have the same members. In other words, $$(A=B)\Leftrightarrow [(\forall a\in A)(a\in B)\land (\forall b\in B)(b\in A)].$$ This is really all there is to it, for all intents and purposes. Sets do not, themselves, have any higher level structure such as order, operations, or any other relations.
Tuples:
An n-tuple is a finite ordered list of elements. Two n-tuples are equal iff they have the same elements appearing in the same order. We denote them as $(a_1, a_2, ... , a_n)$. Given elements $a_1, a_2, ... , a_n, a_{n+1}$, n-tuples are inductively defined as follows:
$(a_1)\equiv\{a_1\}$ is a 1-tuple.
$(a_1, a_2)\equiv\{\{a_1\},\{a_1, a_2\}\}$ is a 2-tuple.
If $(a_1, ... , a_n)$ is an n-tuple, then $((a_1, ... , a_n), a_{n+1})$ is an (n+1)-tuple.
This construction satisfies the defining properties of a tuple; this has been proven many times, so I will not do so again here. However, as a side note, I would like to entertain your inquiry into extending set-builder notation to the description of tuples.
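If it helps to see the construction concretely, here is a minimal Python sketch of it (the helper names `pair` and `ntuple` are my own; `frozenset` stands in for sets of sets, so the elements must be hashable):

```python
def pair(a, b):
    """Kuratowski ordered pair: (a, b) = {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

def ntuple(*elems):
    """n-tuple built inductively: (a1, ..., an, b) = ((a1, ..., an), b)."""
    if len(elems) == 1:
        return frozenset({elems[0]})        # (a1) = {a1}
    if len(elems) == 2:
        return pair(*elems)                 # (a1, a2) = {{a1}, {a1, a2}}
    return pair(ntuple(*elems[:-1]), elems[-1])

# Equality behaves as a tuple's should: order matters.
assert ntuple(1, 2) == ntuple(1, 2)
assert ntuple(1, 2) != ntuple(2, 1)
```

The point is only that pure sets suffice to encode order; nothing about this particular encoding is canonical.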
Describing Sets of Tuples:
$A\equiv\{(x,\ y)\ |\ (x=y)\}$ is the set of all 2-tuples whose elements are equal. This is a trivial example of an equivalence relation.
$A\equiv\{(n,\ n+1)\ |\ (n\in \mathbb{N})\}$ is the set of all 2-tuples of consecutive natural numbers. This is a special type of order relation known as a cover relation.
$A\equiv\{(2x,\ 2y+1)\ |\ (x,y\in\mathbb{Z})\}$ is the set of all 2-tuples whose first element is an even integer and whose second element is an odd integer.
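Set-builder notation maps almost verbatim onto set comprehensions. Here is a sketch of the three examples above, restricted to small finite ranges since the mathematical sets are infinite:

```python
# {(x, y) | x = y}, restricted to x, y in 0..4
diagonal = {(x, y) for x in range(5) for y in range(5) if x == y}

# {(n, n+1) | n in N}, restricted to n in 0..4
consecutive = {(n, n + 1) for n in range(5)}

# {(2x, 2y+1) | x, y in Z}, restricted to x, y in -2..2
even_odd = {(2 * x, 2 * y + 1) for x in range(-2, 3) for y in range(-2, 3)}

assert (3, 3) in diagonal
assert (2, 3) in consecutive
assert (4, -3) in even_odd        # first entry even, second entry odd
```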
Cartesian Products and Sets of Tuples:
Let us define a set operation called the Cartesian product. Given sets $A$, $B$, $$A\times B\equiv\{(a,\ b)\ |\ (a\in A)\land(b\in B)\}.$$ This allows us to concisely describe sets of tuples built from elements of other sets. The set of tuples from the third example above can also be described as $E\times D$, where $E\equiv\{2x\ |\ (x\in\mathbb{Z})\}$ and $D\equiv\{2x+1\ |\ (x\in\mathbb{Z})\}$.
It is important to notice that the Cartesian product is, in general, neither commutative (i.e. $A\times B\neq B\times A$) nor associative (i.e. $(A\times B)\times C\neq A\times(B\times C)$). From now on we will assume the convention that Cartesian products are left-associative; that is, if no parentheses are present, then $A\times B\times C=(A\times B)\times C$. Furthermore, multiple products of the same set can be abbreviated using exponent notation (e.g. $A\times A\times A\times A\times A = A^5$).
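These facts can be spot-checked with `itertools.product`, using small finite stand-in sets:

```python
from itertools import product

A, B, C = {1, 2}, {"x", "y"}, {3}

# Not commutative: the pairs are ordered differently.
assert set(product(A, B)) != set(product(B, A))

# Not associative: (A x B) x C nests differently from A x (B x C).
left = {((a, b), c) for a in A for b in B for c in C}
right = {(a, (b, c)) for a in A for b in B for c in C}
assert left != right

# Exponent notation: A^3 abbreviates A x A x A (left-associated).
A3 = set(product(A, repeat=3))
assert len(A3) == len(A) ** 3
```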
Vectors:
Oh, boy... Here we go! Okay, let's take a look at something you said about vectors:
"A vector is an element of a vector space . . . the objects of $\mathbb{R}^2$ are (column-)vectors which are denoted as tuples . . . box brackets [are used to] denote such vectors and the elements are written in one column . . . commata are not used to separate the objects (however, sometimes [are] . . . ) . . . I have never seen such notation when for instance describing elements of $\mathbb{N}\times\mathbb{R}$."
Our discussion of universes of discourse has just hit home in a real way, and in doing so is causing some serious confusion (reasonably so). You are right in saying that a vector is an element of a vector space, but you may not be aware of the plethora of structural implications that sentence carries with it. But! Let's come back to that in a moment.
"the objects of $\mathbb{R}^2$ are (column-)vectors which are denoted as tuples"
Strictly speaking, this is not true. The elements of $\mathbb{R}^2$ are nothing more or less than 2-tuples with real-valued entries, and $\mathbb{R}$ is simply a set whose members we choose to call "the real numbers". Period. This is clearly shown by seeing that $\mathbb{R}^2=\mathbb{R}\times\mathbb{R}=\{(x,y)\ |\ (x\in\mathbb{R})\land(y\in\mathbb{R})\}$. Less strictly speaking, when people write $\mathbb{R}$ they often mean not simply the set of real numbers, but that set together with the standard addition and multiplication, under which every nonzero element is a unit, so that the real numbers constitute a field. Furthermore, it is assumed that the completeness axiom, the order axioms, and the absolute value we are all familiar with are present as well. Likewise, when people write $\mathbb{R}^2$ they often mean not simply the set of real-valued 2-tuples, but the 2-dimensional vector space over the field $\mathbb{R}$ with the Euclidean norm. What is a vector space over a field? A vector space over a field is a special case of a module over a ring. A field is an integral domain in which every nonzero element is a unit. An integral domain is a commutative ring with unity and the cancellation property. A ring is an abelian group under addition, together with an associative binary operation called multiplication that distributes over addition. If you are not familiar with the notion of a group, then we have delved too far down the rabbit hole.
(Inhale)
I suggest that you not concern yourself with notational subtleties such as commas vs. no commas, square brackets vs. angle brackets vs. parentheses, etc. These are, more often than not, used simply to convey contextual information. And do not worry if you have not heard some of the jargon above; you probably have an intuitive understanding of what is going on (especially considering your inquiry into the deeper subtleties of the relationships between the objects in question), and you really just need to know that the important things are the operations. The thing that makes something a vector is not the presence or absence of commas, or the use of angle brackets. Still, in many domains it is useful to distinguish vectors from "points", or standard tuples, because it makes it easier to keep track of which objects have more structure imposed on their underlying set. The reason you have probably never seen elements of $\mathbb{N}\times\mathbb{R}$ represented using the same notation as that used for vectors is that $\mathbb{N}$ is not a field under the standard operations, so the direct product of that structure with the algebraic structure $\mathbb{R}$ is also not a field. If $\mathbb{N}\times\mathbb{R}$ isn't a field, then it has failed the very first requirement for having a vector space over it. Also, $\langle\mathbb{N},+\rangle$ isn't a group, so if vector addition is simply member-wise addition, then $\langle\mathbb{N}\times\mathbb{R},+\rangle$ is also not a group (another requirement). If it's not a vector space, then its elements are not vectors, and will thus not be denoted as such.
Vector Spaces:
What makes something a vector? If an object is an element of a vector space, then it is a vector. Given any field $F$ and set $V$, if $+:(V\times V)\to V$ and $\cdot:(F\times V)\to V$ are operations (called vector addition and scalar multiplication) such that $\langle V,\ +\rangle$ is an abelian group, scalar multiplication distributes both over vector addition and over scalar addition (the addition operation of the field $F$), scalar multiplication associates with the multiplication of $F$, and lastly the unity of $F$ is an identity under scalar multiplication, then $V$ is a vector space over $F$, and any element of the set $V$ is called a vector. If an object is not an element of a vector space, then it is not a vector. Period. Notice that this does not describe what vectors look like.
Surprising Example: $\mathbb{R}$ is a vector space over itself.
In general, vectors are effectively represented by tuples, but making sense of them requires the context of the algebraic structure (the vector space) within which they are defined. Thus a tuple representation, along with operations describing how tuples are manipulated and related, is a satisfactory way to represent the algebraic structures known as vector spaces.
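To make the "tuples plus operations" point concrete, here is a sketch of $\mathbb{Q}^2$ as plain tuples with member-wise operations (`fractions.Fraction` is used so the scalars really do form a field; the names `vadd` and `smul` are my own):

```python
from fractions import Fraction

def vadd(u, v):
    """Vector addition on n-tuples, defined member-wise."""
    return tuple(a + b for a, b in zip(u, v))

def smul(c, v):
    """Scalar multiplication: a field element scaling every entry."""
    return tuple(c * a for a in v)

u = (Fraction(1, 2), Fraction(3))
v = (Fraction(1), Fraction(-3))
zero = (Fraction(0), Fraction(0))
two = Fraction(2)

# Spot-checks of a few of the vector-space axioms:
assert vadd(u, v) == vadd(v, u)                                   # abelian addition
assert vadd(u, zero) == u                                         # additive identity
assert smul(two, vadd(u, v)) == vadd(smul(two, u), smul(two, v))  # distributivity
```

The tuples alone carry no vector-ness; it is `vadd` and `smul` satisfying the axioms that make this a vector space.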
Matrices:
While matrices are often coupled with vector spaces, they are used for many purposes and are not defined directly in terms of vector spaces. Most treatments of "matrix theory" seem to use set-theoretic results while neither defining matrices in terms of sets nor taking sets as the object of study. As a result, the matrix is the object whose relation to the others will be the most difficult to see intuitively.
Like vectors, however, the thing that makes something a matrix is the structure of which it is a part. A matrix contains elements that have both multiplication and addition operations defined on them. The operations of matrix addition and matrix multiplication (as well as dot products, cross products, determinants, and various other things) are then defined on the matrices in terms of the multiplication and addition operations of their entries. The usual definition of 'rectangular array' is not really helpful in the realm of sets, so I will provide an analogous definition.
Given some set $A$ over which addition and multiplication are defined, an $m$ by $n$ matrix with entries in $A$ is an element of $M_{m\times n}(A)\equiv (A^n)^m=A^{m\times n}$. Notice that, besides the quirky transposition of powers, we are simply using the regular Cartesian product here. The set of $3$ by $2$ matrices with integer entries would look like this: $$M_{3\times 2}(\mathbb{Z})=(\mathbb{Z}^2)^3=(\mathbb{Z}^2)\times(\mathbb{Z}^2)\times(\mathbb{Z}^2)=(\mathbb{Z}\times\mathbb{Z})\times(\mathbb{Z}\times\mathbb{Z})\times(\mathbb{Z}\times\mathbb{Z}).$$ Supposing we may use the same indexing scheme as with regular tuples (I see no reason why not; this is simply a nested tuple), we may refer to the elements of a matrix as follows: given an $m$ by $n$ matrix $M$, $M$ is an m-tuple whose entries are n-tuples. $M_1$ is the first row, $M_2$ is the second row, etc. Since $M_1$ is still a tuple, I can further index its elements: ${M_1}_1$ is the first element of the first row, ${M_1}_2$ is the second element of the first row, etc. Notice, however, that there is some difficulty in concisely representing a single column. To get column $k$ of an $m$ by $n$ matrix, I must define an m-tuple containing the $k$th element of every row: $({M_1}_k, {M_2}_k, ... , {M_m}_k)$. From here I can easily define all of the normal matrix operations in terms of tuples of tuples, and show that they are consistent with the matrix algebra you are used to. I could just as easily have chosen to represent the set of $m$ by $n$ matrices with entries in $A$ by the set $(A^m)^n$ and let $M_1$ be the first column and so forth, or even by $\mathbb{N}\times A^{mn}$, where an $m$ by $n$ matrix $M$ would be of the form $(n, ({M_1}_1, {M_1}_2, ... , {M_1}_n, {M_2}_1, {M_2}_2, ... , {M_2}_n, ... , {M_m}_n))$. The natural number entry is required to distinguish an $m$ by $n$ matrix from an $n$ by $m$ matrix, or any other matrix with the same total number of entries.
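Here is the nested-tuple encoding in miniature, with a made-up 3-by-2 integer matrix (`row` and `col` are my own helpers, 1-indexed to match the text):

```python
# A 3-by-2 matrix as an element of (Z^2)^3: a 3-tuple of 2-tuples (the rows).
M = ((1, 2),
     (3, 4),
     (5, 6))

def row(M, i):
    """The i-th row is just the i-th entry of the outer tuple."""
    return M[i - 1]

def col(M, k):
    """A column must be assembled from the k-th entry of every row."""
    return tuple(r[k - 1] for r in M)

assert row(M, 2) == (3, 4)
assert col(M, 1) == (1, 3, 5)
assert M[0][1] == 2            # (M_1)_2: second element of the first row
```

Note how rows fall out of the representation for free while columns need assembling, exactly the asymmetry described above.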
In the end, it is all in how we define our operations that determines "what" something is. For example, if $F$ is a field, then the set $F^{m\times n}$ of $m$ by $n$ matrices with matrix addition is an abelian group, and scalar multiplication meets all the requirements for vector spaces, thus $F^{m\times n}$ with matrix addition and scalar multiplication is a vector space over the field $F$, even though people would not normally think of sets of matrices that are not "column" or "row" vectors as a vector space. These intricacies are often beyond the scope of the usual applications of matrices however, and the fact that they are not defined within most of the common foundational theories is usually left unscrutinized.
Closing Remarks:
I hope this shed some light on the subject. I think the takeaway is that each of these objects of study is linked to those generic notions we are all so familiar with. If you are in an applied field, then that is satisfactory in most cases. If you are in a field that places high importance on precise and rigorous argument within an axiomatic system, then well-founded formal definitions are of the utmost importance and must be constructed, in terms of axioms or derived results, for each of the mathematical structures you intend to use or study.
Often notations are used interchangeably with one another depending on context. The key point is context, and the level of care will vary considerably when looking at (say) a set theory textbook versus a physics one. Due to conventions and/or canonical bijections we can shed these details in many situations.
Set. From an "applied"/less-rigorous perspective, a set is an unordered collection of numbers. From a set theorist's perspective, a set is just a collection of distinct objects, and (assuming you use axiomatic set theory as your mathematical foundation) everything is a set. Notations for sets vary, but you touch on two common ones. $\{1,2,3\}$ is a set in which $1$, $2$ and $3$ are elements. The notation $\{x\in A:\varphi(x)\}$, where $\varphi$ is some (logical) formula which depends on $x$ and $A$ is the bounding set, describes the set of elements of $A$ satisfying $\varphi$, e.g. $\{x\in\Bbb{N}: 1\le x\le 3\}$. Variations of these exist; however, any mathematician would advise against using parentheses, due to the ambiguity with ordered tuples...
$n$-tuples. From an applied viewpoint, an $n$-tuple is an ordered sequence of $n$ numbers. From a set theorist's perspective, an $n$-tuple is made of nested ordered pairs: we take $n$ (not necessarily distinct) sets $A_1,\ldots,A_n$ and say that $(a_1,\ldots,a_n)$ is a member of $A_1\times\dots\times A_n$ just in case $a_i\in A_i$ for all $i$; any such member is an example of an $n$-tuple. There are different ways to construct this set from the sets $A_i$, but one needs only to consider the condition $a_i\in A_i$ when thinking of $n$-tuples (in most scenarios), since the constructions are equivalent. Of course, a special case is where all the $A_i$ are the same set $X$, in which case an $n$-tuple is an element of the set $X^n=X\times\dots\times X$ ($n$ times).
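A finite sketch of the membership condition, via `itertools.product` (the three sets here are made up for illustration):

```python
from itertools import product

# Three (finite, not necessarily distinct) sets A_1, A_2, A_3.
A1, A2, A3 = {0, 1}, {"a", "b"}, {3.0, 4.0}
triples = set(product(A1, A2, A3))

# (a1, a2, a3) is a member just in case a_i is in A_i for each i.
assert (0, "b", 4.0) in triples
assert ("a", 0, 3.0) not in triples   # right components, wrong positions
assert len(triples) == 2 * 2 * 2
```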
Vector. A vector is an element of a vector space, which is a set satisfying certain axioms. An example of a vector space is $\Bbb{R}^n$, i.e. ${\Bbb{R}\times\dots\times\Bbb{R}}$ ($n$ times), in which case the elements of the vector space are $n$-tuples where each component is in $\Bbb{R}$, as described above. Note that we need not use tuple notation for vectors in $\Bbb{R}^n$ if we do not want to; for example, $(0,\ldots, 0)$ is often written as just $0$, and we could just as well say $\vec{v}:=(0,1)$ is a vector. Moreover, $\Bbb{R}$ is itself a vector space, and the elements of $\Bbb{R}$ are not tuples; they are real numbers, and you should not use parentheses for them. Other examples include the set of bounded sequences and various function spaces, both of which have elements that are not $n$-tuples.
Matrices. From an applied viewpoint, a matrix is a rectangular array of numbers. From a set-theoretic viewpoint, we may view an $m\times n$-matrix over a set $X$ as a function $A\colon \{1,\ldots,m\}\times\{1,\ldots,n\}\to X$, then define $A$ to be a matrix with element $a_{ij}:=A(i,j)$ at position $(i,j)$. This is similar to one of the definitions of an $n$-tuple, where an $n$-tuple is a function $x\colon \{1,\ldots, n\}\to X$ which has components $x_i:=x(i)$ at position $i$. Regarding notation, often we use square brackets, but parentheses are not uncommon... $$A ={\begin{bmatrix}a_{11}&\cdots &a_{1n}\\\vdots &\ddots &\vdots \\a_{m1}&\cdots &a_{mn}\end{bmatrix}}=\left({\begin{array}{rrrr}a_{11}&\cdots &a_{1n}\\ \vdots &\ddots &\vdots \\a_{m1}&\cdots &a_{mn}\end{array}}\right).$$
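The function viewpoint can be sketched directly, modeling an $m\times n$ matrix as a map on index pairs (here a Python dict keyed by $(i,j)$; the entry values are made up for illustration):

```python
m, n = 2, 3

# A : {1,...,m} x {1,...,n} -> X, modeled as a dict on index pairs.
A = {(i, j): 10 * i + j for i in range(1, m + 1) for j in range(1, n + 1)}

def entry(A, i, j):
    """a_ij := A(i, j)."""
    return A[(i, j)]

assert entry(A, 1, 3) == 13
assert entry(A, 2, 1) == 21

# An n-tuple is the one-index special case: x : {1,...,n} -> X.
x = {i: i * i for i in range(1, 4)}
assert x[2] == 4
```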
Confusions.
- From a set-theoretic perspective there is a distinction (using the construction I mention above) between $(x_1,\ldots,x_n)\in\Bbb{R}^n$ and $(x_1,\ldots,x_n)\in\Bbb{R}^{1\times n}$; they are different "objects" (sets). This is not the case in many applied topics; there is a canonical bijection between the two objects, so we lose nothing by ignoring these technicalities.
- However, in Linear Algebra we need to distinguish between row and column vectors in order for matrix multiplication to be well-defined. This distinction is sometimes (but not always) made by using square brackets for row and column vectors. Row vectors can be considered as functions $x\colon\{1\}\times\{1,\ldots,n\}\to X$ and column vectors as functions $y\colon\{1,\ldots,m\}\times\{1\}\to X$; that is, they are matrices with one "dimension" being $1$. Due to a canonical bijection $x_{1k}\mapsto x_k$, we can view these functions as elements of $X^n$ (respectively $X^m$), e.g. as a function $x\colon\{1,\ldots,n\}\to X$, without confusion.
- Coordinates. One should make the distinction between coordinate vectors and $n$-tuples. Given an ordered basis $B=\{e_1,\ldots,e_n\}$ of a finite-dimensional vector space $V$ over a field $F$, a coordinate vector $(\alpha_1,\ldots,\alpha_n)$, with $\alpha_i\in F$, is a representation of $$v=\alpha_1 e_1+\dots+\alpha_n e_n\in V.$$ The notation for a coordinate vector representing $v$ given a basis $B$ is $[v]_B=(\alpha_1,\ldots,\alpha_n)$, or sometimes $(v)_B$. Coordinates were alluded to in the comment section of an answer Frank Vel linked to, whereby although an arbitrary vector space may not have "ordered tuples" as its elements, we can still represent them with $n$-tuples. Again, in $\Bbb{R}^n$, due to a canonical bijection, we lose nothing by using the same notation, working on the convention that we use the standard basis unless specified otherwise. One should be able to tell by context what $(\alpha_1,\ldots,\alpha_n)$ represents: if it is a member of some set $X^n$, it is likely an $n$-tuple; if it is a member of some vector space with an ordered basis $B$, it is likely a coordinate vector. The notations are the same, however. (Coordinate vectors also generalize to infinite-dimensional spaces.)
- Further discussion on the difference between coordinates and components (in $n$-tuples) can be found in this question.
- Annoyingly, open intervals (in $\Bbb{R}$) use the same notation as ordered pairs, but this is rarely a problem, because of context.
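To illustrate the coordinate-vector idea numerically: given the (hypothetical) ordered basis $B=((1,1),(1,-1))$ of $\Bbb{Q}^2$, the coordinates of $v=(3,1)$ come from solving a $2\times 2$ linear system, done below by Cramer's rule (`coords_2d` is my own helper; `fractions.Fraction` keeps the field arithmetic exact):

```python
from fractions import Fraction as F

def coords_2d(v, e1, e2):
    """Coordinates (a1, a2) of v in the ordered basis (e1, e2) of Q^2,
    i.e. the solution of v = a1*e1 + a2*e2, found via Cramer's rule."""
    det = e1[0] * e2[1] - e2[0] * e1[1]
    a1 = (v[0] * e2[1] - e2[0] * v[1]) / det
    a2 = (e1[0] * v[1] - v[0] * e1[1]) / det
    return (a1, a2)

e1, e2 = (F(1), F(1)), (F(1), F(-1))
v = (F(3), F(1))
a1, a2 = coords_2d(v, e1, e2)

# The coordinate vector [v]_B is distinct from the tuple v itself,
# but reconstructs it: v = a1*e1 + a2*e2.
assert (a1 * e1[0] + a2 * e2[0], a1 * e1[1] + a2 * e2[1]) == v
```

Here $[v]_B=(2,1)$ while $v=(3,1)$: same notation, different objects.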
As Frank says in the comments, without context $(a,b,c)$ could indeed be any one of a tuple, a vector, or a matrix. For most intents and purposes, a $1\times n$ matrix is the same as an $n$-tuple, though; so the question is reduced to "is it a tuple or a vector?". The answer depends on the bounding set—is it a vector space with some specified basis, or is it $X^n$ for some set $X$?—and context.
Let me know if there are any outstanding confusions and I will add to my list.